MA 104: Differential Equations
1 Introduction
1.1 Classification of Differential Equations
Definition 1.1 (Differential Equations). An equation involving one dependent variable
and its derivatives with respect to one or more independent variables is called a differential
equation.
Example 1.1. The following are examples of differential equations:

\frac{d^2y}{dx^2} + xy\left(\frac{dy}{dx}\right)^2 = 0    (1.1)

\frac{d^4x}{dt^4} + 5\frac{d^2x}{dt^2} + 3x = \sin t    (1.2)

x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - p^2)y = 0    (1.3)

x^2\left(\frac{d^2y}{dx^2}\right)^3 + x\frac{dy}{dx} + (x^2 - p^2)y = 0.    (1.4)
Definition 1.2 (Ordinary Differential Equation). An ordinary differential equation (ODE)
is one in which there is only one independent variable.
From the definition of ordinary differential equation, it follows that all the derivatives
occurring in it are ordinary derivatives.
Definition 1.3 (Partial Differential Equation). A differential equation involving partial
derivatives of the dependent variable with respect to more than one independent variable
is called a partial differential equation.
Example 1.2. The differential equations appearing in Example 1.1 are all instances of ordinary differential equations. The differential equation

\frac{∂v}{∂s} + \frac{∂v}{∂t} = v

is an example of a partial differential equation.
Note 1. In these lectures we shall discuss only ordinary differential equations, and so the
word ordinary will be dropped.
Definition 1.4 (Order of the Differential Equation). The order of the highest ordered
derivative involved in a differential equation is called the order of the differential equation.
Example 1.3. In Example 1.1, the order of the differential equations (1.1), (1.2), (1.3)
and (1.4) are respectively 2, 4, 2 and 2.
Definition 1.5 (Degree of the Differential Equation). If a differential equation has the
form of an algebraic equation of degree k in the highest derivative, then we say that the
given differential equation is of degree k.
Example 1.4. In Example 1.1, the degrees of the differential equations (1.1), (1.2), (1.3) and (1.4) are respectively 1, 1, 1 and 3.
Definition 1.6 (Linear and Nonlinear Ordinary Differential Equation). A linear ordinary differential equation of order n, in the dependent variable y and the independent variable x, is an equation that is in, or can be expressed in, the form

a_0(x)\frac{d^ny}{dx^n} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = b(x),

where a_0 is not identically zero.
A nonlinear ordinary differential equation is an ordinary differential equation that is not linear.
Example 1.5. In Example 1.1, the ordinary differential equations (1.2) and (1.3) are linear, but (1.1) is not. Moreover, the following ordinary differential equations are all nonlinear:

\frac{d^2y}{dx^2} + 5\frac{dy}{dx} + 6y^2 = 0

\frac{d^2y}{dx^2} + 5\left(\frac{dy}{dx}\right)^2 + 6y = 0

\frac{d^2y}{dx^2} + 5y\frac{dy}{dx} + 6y = 0.
Definition 1.7 (Homogeneous and Nonhomogeneous Linear ODE). Consider a linear ordinary differential equation

a_0(x)\frac{d^ny}{dx^n} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = F(x).    (1.5)

If the function F(x) is not identically zero, then (1.5) is called a linear nonhomogeneous ODE; otherwise the equation is said to be linear homogeneous. Thus, a linear homogeneous ODE is of the form

a_0(x)\frac{d^ny}{dx^n} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = 0.

For a linear nonhomogeneous ODE

a_0(x)\frac{d^ny}{dx^n} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = F(x),    (1.6)

the equation

a_0(x)\frac{d^ny}{dx^n} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = 0

is called the homogeneous part of (1.6).
If the functions a_0(x), a_1(x), ..., a_n(x) are constants, then (1.5) is called a linear ODE with constant coefficients. Similarly, if the functions a_0(x), a_1(x), ..., a_n(x) are constants and F(x) is identically zero, then (1.5) is called a linear homogeneous ODE with constant coefficients.
1.2 Solutions of Differential Equations
Consider the nth-order ordinary differential equation

F\left(x, y, \frac{dy}{dx}, \ldots, \frac{d^ny}{dx^n}\right) = 0,    (1.7)

where F is a real function of its (n + 2) arguments x, y, \frac{dy}{dx}, \ldots, \frac{d^ny}{dx^n}.
Definition 1.8 (Implicit and Explicit Solutions).
1. Let f be a real function defined for all x in a real interval I¹ and having an nth derivative (and hence also all lower-order derivatives) for all x ∈ I. The function f is called an explicit solution of the differential equation (1.7) on I if it fulfills the following two requirements:
(a) F(x, f(x), f'(x), ..., f^{(n)}(x)) is defined for all x ∈ I, and
(b) F(x, f(x), f'(x), ..., f^{(n)}(x)) = 0 for all x ∈ I.
2. A relation g(x, y) = 0 is called an implicit solution of (1.7) if this relation defines at
least one real function f of the variable x on an interval I such that this function is
an explicit solution of (1.7) on this interval.
3. Both explicit solutions and implicit solutions will usually be called simply solutions.
Example 1.6.
1. The function f defined for all real x by f(x) = 2 sin x + 3 cos x is an explicit solution of the differential equation \frac{d^2y}{dx^2} + y = 0 for all real x (verify).
2. The function y(x) = x tan(x + 3) is an explicit solution of the differential equation x\frac{dy}{dx} = x^2 + y^2 + y on I = (-π/2 - 3, π/2 - 3) (verify).
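Both "verify" exercises can be checked numerically. The sketch below (plain Python with central finite differences; the sample points are arbitrary choices inside each interval) confirms that the residual of each equation is essentially zero along the claimed solutions.

```python
import math

def d1(f, x, h=1e-6):
    # Central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-5):
    # Central finite-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# 1. f(x) = 2 sin x + 3 cos x should satisfy y'' + y = 0 for all real x.
f = lambda x: 2 * math.sin(x) + 3 * math.cos(x)
for x in (-2.0, 0.3, 1.7):
    assert abs(d2(f, x) + f(x)) < 1e-4

# 2. y(x) = x tan(x + 3) should satisfy x y' = x^2 + y^2 + y on (-pi/2 - 3, pi/2 - 3).
y = lambda x: x * math.tan(x + 3)
for x in (-3.5, -3.0, -2.5):
    assert abs(x * d1(y, x) - (x**2 + y(x)**2 + y(x))) < 1e-3

print("both explicit solutions verified")
```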
Example 1.7. The relation

x^2 + y^2 - 25 = 0    (1.8)

is an implicit solution of the differential equation

x + y\frac{dy}{dx} = 0    (1.9)

on the interval I defined by -5 < x < 5. For the relation (1.8) defines the two real functions f_1 and f_2 given by f_1(x) = \sqrt{25 - x^2} and f_2(x) = -\sqrt{25 - x^2}, respectively, for all x ∈ I, and both of these functions are explicit solutions of the differential equation (1.9) on I.
¹Recall that an interval is a set of either of the following forms:
(a, b), (a, b], [a, b), [a, b], (a, ∞), [a, ∞), (−∞, b), (−∞, b], (−∞, ∞),
where a, b ∈ R.
Example 1.8 (Formal Solution). Consider the relation

x^2 + y^2 + 25 = 0.    (1.10)

Let us differentiate the relation (1.10) implicitly with respect to x. We obtain

2x + 2y\frac{dy}{dx} = 0, or \frac{dy}{dx} = -\frac{x}{y}.

Substituting this into the differential equation

x + y\frac{dy}{dx} = 0,    (1.11)

we obtain the formal identity

x + y\left(-\frac{x}{y}\right) = 0.
Thus the relation (1.10) formally satisfies the differential equation (1.11). Can we conclude
from this alone that (1.10) is an implicit solution of (1.11)? The answer to this question
is “no,” for we have no assurance from this that the relation (1.10) defines any function
that is an explicit solution of (1.11) on any real interval I. All that we have shown is that
(1.10) is a relation between x and y that, upon implicit differentiation and substitution,
formally reduces the differential equation (1.11) to a formal identity. It is called a formal
solution.
2 First Order Ordinary Differential Equations
Consider the first-order differential equation of the form

\frac{dy}{dx} = f(x, y),    (2.1)
where f(x, y) is a continuous function throughout some rectangle R in the xy plane.
The geometric meaning of a solution of (2.1) can be understood as follows (Fig. 1). If P_0 = (x_0, y_0) is a point in R, then the number

\left.\frac{dy}{dx}\right|_{P_0} = f(x_0, y_0)

determines a direction at P_0. Now let P_1 = (x_1, y_1) be a point near P_0 in this direction, and use

\left.\frac{dy}{dx}\right|_{P_1} = f(x_1, y_1)

to determine a new direction at P_1. Next, let P_2 = (x_2, y_2) be a point near P_1 in this new direction, and use the number

\left.\frac{dy}{dx}\right|_{P_2} = f(x_2, y_2)
to determine yet another direction at P2. If we continue this process, we obtain a broken
line with points scattered along it like beads; if we now imagine that these successive points
Figure 1:
move closer to one another and become more numerous, then the broken line approaches
a smooth curve through the initial point P0. This curve is a solution y = y(x) of equation
(2.1); for at each point (x, y) on it, the slope is given by f(x, y), which is precisely the condition
required by the differential equation. If we start with a different initial point, then in
general we obtain a different curve (or solution). Thus the solutions of (2.1) form a family
of curves, called integral curves. Furthermore, it appears to be a reasonable guess that
through each point in R there passes just one integral curve of (2.1). This discussion is
intended only to lend plausibility to the following precise statement.
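The broken-line construction described above is exactly Euler's method. A minimal sketch (the step size and the sample equation dy/dx = -x/y starting at (3, 4), whose integral curve is the circle x^2 + y^2 = 25, are illustrative choices):

```python
def euler_polyline(f, x0, y0, h, steps):
    """Build the broken line: from each point, step h in x along
    the direction whose slope is f(x, y) at that point."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
        points.append((x, y))
    return points

# dy/dx = -x/y has integral curves x^2 + y^2 = c^2; through (3, 4), c = 5.
f = lambda x, y: -x / y
pts = euler_polyline(f, 3.0, 4.0, 0.001, 500)
x_end, y_end = pts[-1]

# As the points become more numerous (h -> 0), the polyline approaches
# the smooth integral curve x^2 + y^2 = 25 through the initial point.
assert abs(x_end**2 + y_end**2 - 25.0) < 0.01
print("polyline stays near the circle x^2 + y^2 = 25")
```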
Theorem 2.1 (Picard's Theorem²). Let f(x, y) and \frac{∂f}{∂y} be continuous functions on a closed rectangle R := {(x, y) : a ≤ x ≤ b, c ≤ y ≤ d}. Then through each point (x0, y0) in the interior of R there passes a unique integral curve of the equation \frac{dy}{dx} = f(x, y).
If we consider a fixed value of x0 in this theorem, then the integral curve that passes
through (x0, y0) is fully determined by the choice of y0. In this way we see that the integral
curves of (2.1) constitute what is called a one-parameter family of curves. The equation
of this family can be written in the form
y = y(x, c) (2.2)
where different choices of the parameter c yield different curves in the family. The integral
curve that passes through (x0, y0) corresponds to the value of c for which y0 = y(x0, c). If
²This theorem can be strengthened in various directions by weakening its hypotheses. We will discuss
it in Section 5.
we denote this number by c0, then (2.2) is called the general solution of (2.1), and
y = y(x, c0) (2.3)
is called the particular solution that satisfies the condition
y = y0 when x = x0.
The essential feature of the general solution (2.2) is that the constant c in it can be
chosen so that an integral curve passes through any given point of the rectangle under
consideration.
Remark 2.1. As discussed above, the general solution of a first-order ODE is a one-parameter family of infinitely many solution curves, one for each value of the parameter c. If we choose a specific c (e.g., c = 6.45 or 0 or 2.01) we obtain what is called a particular solution of the ODE. A particular solution does not contain any arbitrary constants.
We also note that a first-order ODE may sometimes have an additional solution that cannot be obtained from the general solution by assigning a value to the parameter. Such a solution is called a singular solution. For instance,

\left(\frac{dy}{dx}\right)^2 - x\frac{dy}{dx} + y = 0

has the general solution y = cx - c^2. It also has the solution y_s(x) = x^2/4, which cannot be obtained from the general solution by choosing a specific value of c. Hence, the latter solution is a singular solution.
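A quick numerical check of this example (plain Python; the sample values of c and x are arbitrary) confirms that both the family y = cx - c^2 and the singular solution y_s(x) = x^2/4 satisfy (dy/dx)^2 - x(dy/dx) + y = 0:

```python
# Residual of (y')^2 - x*y' + y = 0 for a candidate pair (y, y').
def residual(x, y, yprime):
    return yprime**2 - x * yprime + y

# General solution y = c*x - c^2 has y' = c:
# residual = c^2 - c*x + (c*x - c^2) = 0 identically.
for c in (-1.0, 0.5, 2.0):
    for x in (-3.0, 0.0, 4.0):
        assert residual(x, c * x - c**2, c) == 0.0

# Singular solution y = x^2/4 has y' = x/2:
# residual = x^2/4 - x^2/2 + x^2/4 = 0 identically.
for x in (-3.0, 0.0, 4.0):
    assert residual(x, x**2 / 4, x / 2) == 0.0

print("general and singular solutions both satisfy the ODE")
```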
Remark 2.2. We shall develop methods that will give general solutions uniquely (perhaps
except for notation). Hence we shall say the general solution of a given ODE (instead of
a general solution).
2.1 Initial Value Problem
Definition 2.1 (Initial Value Problem). Consider the first-order differential equation

\frac{dy}{dx} = f(x, y),    (2.4)

where f is a function of x and y in some domain³ D of the xy plane, and let (x0, y0) be a point of D. The initial-value problem (IVP) associated with (2.4) is to find a solution φ of the differential equation (2.4), defined on some real interval containing x0, and satisfying the initial condition φ(x0) = y0.
In the customary abbreviated notation, this initial-value problem may be written

\frac{dy}{dx} = f(x, y), \quad y(x0) = y0.

The condition y(x0) = y0 is called the supplementary condition of the IVP.
³A domain is an open, connected set. For those unfamiliar with such concepts, D may be regarded as
the interior of some simple closed curve in the plane.
Remark 2.3. For higher-order ODEs, the supplementary conditions are also applied to the lower derivatives of the function. For example, the following is an instance of an IVP for a second-order ODE:

\frac{d^2y}{dx^2} + y = 0, \quad y(0) = 0, \ y'(0) = 1.

Observe that in IVPs, the supplementary conditions relate to one value of x. If the conditions relate to two different x values, the problem is called a boundary-value problem. For instance, the following is an example of a boundary-value problem:

\frac{d^2y}{dx^2} + y = 0, \quad y(0) = 0, \ y(π/2) = 1.
Problem 2.1. Solve the initial-value problem

\frac{dy}{dx} = -\frac{x}{y},    (2.5)
y(3) = 4,    (2.6)

given that the differential equation (2.5) has a one-parameter family of solutions which may be written in the form

x^2 + y^2 = c^2.    (2.7)
Solution. The condition (2.6) means that we seek the solution of (2.5) such that y = 4 at x = 3. Thus the pair of values (3, 4) must satisfy the relation (2.7). Substituting x = 3 and y = 4 into (2.7), we find 9 + 16 = c^2, or c^2 = 25. Now substituting this value of c^2 into (2.7), we have x^2 + y^2 = 25. Solving this for y, we obtain y = ±\sqrt{25 - x^2}. Obviously the positive sign must be chosen to give the value +4 at x = 3. Thus the function f defined by f(x) = \sqrt{25 - x^2}, -5 < x < 5, is the solution of the problem. In the usual abbreviated notation, we write this solution as y = \sqrt{25 - x^2}.
Problem 2.2. Assuming that the differential equation

x\frac{dy}{dx} - 3y + 3 = 0    (2.8)

has a one-parameter family of solutions given by y = 1 + cx^3, solve the initial-value problems consisting of the differential equation (2.8) and the initial conditions (a) y(0) = 0, (b) y(1) = 1, and (c) y(0) = 1.
Answer: no solution for (a), exactly one solution y(x) = 1 for (b), and infinitely many solutions y(x) = 1 + cx^3 for (c).
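The three cases can be confirmed by checking which members of the family y = 1 + cx^3 pass through each prescribed point; a small sketch (the sample values of c are arbitrary):

```python
# Family of solutions of x*y' - 3y + 3 = 0: y = 1 + c*x^3.
y = lambda x, c: 1 + c * x**3

sample_cs = (-5.0, 0.0, 1.0, 7.3)

# (a) y(0) = 0: y(0) = 1 for every c, so no member of the family fits.
assert all(y(0, c) == 1 for c in sample_cs)

# (b) y(1) = 1: forces 1 + c = 1, i.e. c = 0, giving the single solution y = 1.
assert y(1, 0.0) == 1 and y(1, 1.0) != 1

# (c) y(0) = 1: satisfied by every c, so there are infinitely many solutions.
assert all(y(0, c) == 1 for c in sample_cs)

print("cases (a), (b), (c) behave as stated")
```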
3 Solutions of First Order Equations
The first-order differential equations to be studied in this section may be expressed in either the derivative form

\frac{dy}{dx} = f(x, y)    (3.1)

or the differential form

M(x, y)\,dx + N(x, y)\,dy = 0.    (3.2)

An equation in one of these forms may readily be written in the other form.
Remark 3.1. In the form (3.1) it is clear from the notation itself that y is regarded as the dependent variable and x as the independent one; but in the form (3.2) we may regard either variable as the dependent one and the other as the independent one. However, for all differential equations of the form (3.2) in x and y, we shall regard y as dependent and x as independent, unless the contrary is specifically stated.
3.1 Exact Differential Equations
Definition 3.1 (Total Differential). Let F be a function of two real variables such that F has continuous first partial derivatives in a domain D. The total differential dF of the function F is defined by the formula

dF(x, y) = \frac{∂F(x, y)}{∂x}dx + \frac{∂F(x, y)}{∂y}dy

for all (x, y) ∈ D.
Definition 3.2 (Exact Differential Equation). A first-order ODE of the form

M(x, y)dx + N(x, y)dy = 0    (3.3)

is called an exact differential equation in a domain D if there exists a function F(x, y) such that
1. F has continuous first partial derivatives in the domain D, and
2. \frac{∂F(x, y)}{∂x} = M(x, y) and \frac{∂F(x, y)}{∂y} = N(x, y) for all (x, y) ∈ D.
[So in this case, equation (3.3) can be written as dF(x, y) = 0. Also note that in this case, M and N will be continuous in D.]
Theorem 3.1 (Test for Exactness). Consider the differential equation

M(x, y)dx + N(x, y)dy = 0    (3.4)

where M and N have continuous first partial derivatives at all points (x, y) in a rectangular domain R := {(x, y) : |x - x0| < a, |y - y0| < b}. Then (3.4) is exact in R if and only if

\frac{∂M}{∂y} = \frac{∂N}{∂x} for all (x, y) ∈ R.
Proof. First we assume that the equation (3.4) is exact in R, and we prove that \frac{∂M}{∂y} = \frac{∂N}{∂x} for all (x, y) ∈ R. Since the equation (3.4) is exact in R, there exists a function F such that

\frac{∂F(x, y)}{∂x} = M(x, y) and \frac{∂F(x, y)}{∂y} = N(x, y)

for all (x, y) ∈ R. Then

\frac{∂^2F(x, y)}{∂x∂y} = \frac{∂M(x, y)}{∂y} and \frac{∂^2F(x, y)}{∂y∂x} = \frac{∂N(x, y)}{∂x}

for all (x, y) ∈ R. But, using the continuity of the first partial derivatives of M and N, we have

\frac{∂^2F(x, y)}{∂x∂y} = \frac{∂^2F(x, y)}{∂y∂x},

and therefore

\frac{∂M(x, y)}{∂y} = \frac{∂N(x, y)}{∂x} for all (x, y) ∈ R.
Conversely, we assume that \frac{∂M}{∂y} = \frac{∂N}{∂x} for all (x, y) ∈ R, and we prove that the equation (3.4) is exact in R. We note that since M and N have continuous first partial derivatives at all points (x, y) ∈ R, M and N are continuous in R. Thus, we just need to prove that there exists a function F such that

\frac{∂F(x, y)}{∂x} = M(x, y)    (3.5)

and

\frac{∂F(x, y)}{∂y} = N(x, y)    (3.6)

for all (x, y) ∈ R. Let us assume that F satisfies (3.5) and proceed. Then

F(x, y) = ∫ M(x, y)\,∂x + φ(y),    (3.7)

where ∫ M(x, y)\,∂x indicates a partial integration with respect to x, holding y constant, and φ is an arbitrary function of y only. Differentiating (3.7) partially with respect to y, we obtain

\frac{∂F(x, y)}{∂y} = \frac{∂}{∂y}∫ M(x, y)\,∂x + \frac{dφ(y)}{dy}.

Now if (3.6) is to be satisfied, we must have

N(x, y) = \frac{∂}{∂y}∫ M(x, y)\,∂x + \frac{dφ(y)}{dy},    (3.8)

and hence

\frac{dφ(y)}{dy} = N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x.
Since φ is a function of y only, the derivative dφ(y)/dy must also be independent of x. That is, in order for (3.8) to hold,

N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x    (3.9)

must be independent of x. We prove this by showing that

\frac{∂}{∂x}\left[N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x\right] = 0

for all (x, y) ∈ R. In fact,

\frac{∂}{∂x}\left[N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x\right]
= \frac{∂N(x, y)}{∂x} - \frac{∂^2}{∂x∂y}∫ M(x, y)\,∂x
= \frac{∂N(x, y)}{∂x} - \frac{∂^2F(x, y)}{∂x∂y}  (assuming (3.5))
= \frac{∂N(x, y)}{∂x} - \frac{∂^2F(x, y)}{∂y∂x}
= \frac{∂N(x, y)}{∂x} - \frac{∂^2}{∂y∂x}∫ M(x, y)\,∂x
= \frac{∂N(x, y)}{∂x} - \frac{∂M(x, y)}{∂y}
= 0 for all (x, y) ∈ R.

Thus, we have shown that (3.9) is independent of x, and we may write

φ(y) = ∫\left[N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x\right]dy.

Substituting this into equation (3.7), we have

F(x, y) = ∫ M(x, y)\,∂x + ∫\left[N(x, y) - \frac{∂}{∂y}∫ M(x, y)\,∂x\right]dy.

This F(x, y) satisfies both (3.5) and (3.6) for all (x, y) ∈ R, and so M dx + N dy = 0 is exact in R.
Example 3.1. Consider the equation

y^2 dx + 2xy\,dy = 0.

Here M = y^2, N = 2xy. Since \frac{∂M}{∂y} = 2y = \frac{∂N}{∂x} for all (x, y), the equation is exact in every rectangular domain D.
Example 3.2. Consider the equation

y\,dx + 2x\,dy = 0.

Here M = y, N = 2x, and \frac{∂M}{∂y} = 1 ≠ 2 = \frac{∂N}{∂x} for all (x, y). Thus the equation is not exact in any rectangular domain D.
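The test for exactness can be sketched numerically with finite-difference partial derivatives (plain Python; the sample points are arbitrary), using the two examples above as data:

```python
def is_exact(M, N, points, h=1e-6, tol=1e-4):
    """Check dM/dy == dN/dx (central differences) at each sample point."""
    for x, y in points:
        My = (M(x, y + h) - M(x, y - h)) / (2 * h)
        Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
        if abs(My - Nx) > tol:
            return False
    return True

pts = [(0.5, 1.2), (-1.0, 2.0), (2.0, -0.7)]

# Example 3.1: y^2 dx + 2xy dy = 0 has dM/dy = 2y = dN/dx, so it is exact.
assert is_exact(lambda x, y: y**2, lambda x, y: 2 * x * y, pts)

# Example 3.2: y dx + 2x dy = 0 has dM/dy = 1 != 2 = dN/dx, so it is not.
assert not is_exact(lambda x, y: y, lambda x, y: 2 * x, pts)

print("exactness test agrees with Examples 3.1 and 3.2")
```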
Theorem 3.2 (Solution of Exact Equations). Consider the exact differential equation

M(x, y)dx + N(x, y)dy = 0    (3.10)

in a rectangular domain R := {(x, y) : |x - x0| < a, |y - y0| < b}. Then a one-parameter family of solutions of this differential equation is given by F(x, y) = c, where c is an arbitrary constant and F is a function such that

\frac{∂F(x, y)}{∂x} = M(x, y) and \frac{∂F(x, y)}{∂y} = N(x, y)    (3.11)

for all (x, y) ∈ R.
Proof. Since \frac{∂F(x, y)}{∂x} = M(x, y) and \frac{∂F(x, y)}{∂y} = N(x, y), the equation can be written as

\frac{∂F(x, y)}{∂x}dx + \frac{∂F(x, y)}{∂y}dy = 0,

or simply dF(x, y) = 0. The relation F(x, y) = c, where c is an arbitrary constant, is obviously a solution of this.
Remark 3.2. Note that by the definition of an exact differential equation, the existence of an F(x, y) satisfying (3.11) is guaranteed.
Problem 3.1. Solve the equation

(3x^2 + 4xy)dx + (2x^2 + 2y)dy = 0.

Solution. Here, M = 3x^2 + 4xy, N = 2x^2 + 2y, and \frac{∂M}{∂y} = 4x = \frac{∂N}{∂x}. Since \frac{∂M}{∂y} = \frac{∂N}{∂x} for all real (x, y), the equation is exact in every rectangular domain D. Thus we must have an F(x, y) such that

\frac{∂F(x, y)}{∂x} = M(x, y)    (3.12)
\frac{∂F(x, y)}{∂y} = N(x, y)    (3.13)

From (3.12), we obtain

F(x, y) = ∫ M(x, y)\,∂x + φ(y) = ∫(3x^2 + 4xy)\,∂x + φ(y) = x^3 + 2x^2 y + φ(y).
Therefore, \frac{∂F(x, y)}{∂y} = 2x^2 + \frac{dφ(y)}{dy}, and hence using (3.13), we obtain

2x^2 + \frac{dφ(y)}{dy} = 2x^2 + 2y
⇒ \frac{dφ(y)}{dy} = 2y
⇒ φ(y) = y^2 + C_1, where C_1 is an arbitrary constant.

Thus, F(x, y) = x^3 + 2x^2 y + y^2 + C_1. Hence a one-parameter family of solutions is F(x, y) = C_2, or x^3 + 2x^2 y + y^2 + C_1 = C_2. Taking C_2 - C_1 = C, we obtain the solution

x^3 + 2x^2 y + y^2 = C.
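The function F found above can be checked numerically: its finite-difference partial derivatives should reproduce M and N (sample points chosen arbitrarily):

```python
# F(x, y) = x^3 + 2*x^2*y + y^2 should satisfy dF/dx = M and dF/dy = N.
F = lambda x, y: x**3 + 2 * x**2 * y + y**2
M = lambda x, y: 3 * x**2 + 4 * x * y
N = lambda x, y: 2 * x**2 + 2 * y

h = 1e-6
for x, y in [(1.0, 2.0), (-0.5, 0.3), (2.0, -1.0)]:
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)  # numerical dF/dx
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)  # numerical dF/dy
    assert abs(Fx - M(x, y)) < 1e-4
    assert abs(Fy - N(x, y)) < 1e-4

print("F(x, y) = x^3 + 2x^2 y + y^2 reproduces M and N")
```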
Problem 3.2. Solve the equation

(2x + sin x tan y)dx - (cos x sec^2 y)dy = 0.

Solution. Here, M = 2x + sin x tan y, N = -cos x sec^2 y, and \frac{∂M}{∂y} = sin x sec^2 y = \frac{∂N}{∂x}. Since \frac{∂M}{∂y} = \frac{∂N}{∂x} for all real (x, y), the equation is exact in every rectangular domain D. Thus we must have an F(x, y) such that

\frac{∂F(x, y)}{∂x} = M(x, y)    (3.14)
\frac{∂F(x, y)}{∂y} = N(x, y)    (3.15)

From (3.14), we obtain

F(x, y) = ∫ M(x, y)\,∂x + φ(y) = ∫(2x + sin x tan y)\,∂x + φ(y) = x^2 - cos x tan y + φ(y).

Therefore, \frac{∂F(x, y)}{∂y} = -cos x sec^2 y + \frac{dφ(y)}{dy}, and hence using (3.15), we obtain

-cos x sec^2 y + \frac{dφ(y)}{dy} = -cos x sec^2 y
⇒ \frac{dφ(y)}{dy} = 0
⇒ φ(y) = C_0, where C_0 is an arbitrary constant.

Thus, F(x, y) = x^2 - cos x tan y + C_0. Hence a one-parameter family of solutions is F(x, y) = C_1, or x^2 - cos x tan y + C_0 = C_1. Taking C_1 - C_0 = C, we obtain the solution

x^2 - cos x tan y = C.
3.2 Reduction to Exact Equations: Integrating Factors
Consider the differential equation

y\,dx + 2x\,dy = 0.

One can easily verify that this equation is not exact. However, if we multiply this equation by y, then the resulting equation

y^2 dx + 2xy\,dy = 0

is exact. Thus, we have the following definition.
Definition 3.3 (Integrating Factor). If the differential equation
M(x, y)dx + N(x, y)dy = 0 (3.16)
is not exact in a domain D but the differential equation
µ(x, y)M(x, y)dx + µ(x, y)N(x, y)dy = 0
is exact in D, then µ(x, y) is called an integrating factor of the differential equation.
Remark 3.3. If an equation M(x, y)dx + N(x, y)dy = 0 has an integrating factor, then
it has infinitely many integrating factors. In fact, if µ is an integrating factor, then kµ is
also an integrating factor, where k is a nonzero constant.
Remark 3.4. Multiplication of a nonexact differential equation by an integrating factor
thus transforms the nonexact equation into an exact one. This exact equation has the same
one-parameter family of solutions as the nonexact original. However, the multiplication of the original equation by the integrating factor may result in
1. the loss of (one or more) solutions of the original,
2. the gain of (one or more) functions which are solutions of the "new" equation but not of the original, or
3. both of these phenomena.
Hence, whenever we transform a nonexact equation into an exact one by multiplication by
an integrating factor, we should check carefully to determine whether any solutions may
have been lost or gained.
3.2.1 Methods to Find Integrating Factors
Theorem 3.3. If (3.16) is such that

\frac{1}{N}\left(\frac{∂M}{∂y} - \frac{∂N}{∂x}\right)

is a function of x alone, say f(x), then

µ = e^{∫ f(x)\,dx}

is a function of x only and is an integrating factor for (3.16).
Proof. We need to show that the equation

e^{∫ f(x)\,dx} M(x, y)dx + e^{∫ f(x)\,dx} N(x, y)dy = 0

is exact. Complete the proof.
Theorem 3.4. If (3.16) is such that

-\frac{1}{M}\left(\frac{∂M}{∂y} - \frac{∂N}{∂x}\right)

is a function of y alone, say g(y), then

µ = e^{∫ g(y)\,dy}

is a function of y only and is an integrating factor for (3.16).
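Applied to the motivating equation y dx + 2x dy = 0, Theorem 3.4 gives -(∂M/∂y - ∂N/∂x)/M = -(1 - 2)/y = 1/y, so µ = e^{∫ dy/y} = y, recovering the factor used at the start of this subsection. A numeric sketch (finite differences; sample points arbitrary) confirming that µM dx + µN dy = 0 is exact:

```python
def exactness_gap(M, N, x, y, h=1e-6):
    """dM/dy - dN/dx via central differences; zero means exact."""
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    return My - Nx

M = lambda x, y: y       # y dx + 2x dy = 0 is not exact ...
N = lambda x, y: 2 * x
mu = lambda x, y: y      # ... but Theorem 3.4 yields mu = e^{int dy/y} = y

for x, y in [(0.5, 1.0), (2.0, 3.0), (-1.0, 0.7)]:
    # Without mu the gap is dM/dy - dN/dx = 1 - 2 = -1.
    assert abs(exactness_gap(M, N, x, y) + 1.0) < 1e-4
    # With mu, the equation y^2 dx + 2xy dy = 0 has gap 2y - 2y = 0.
    gap = exactness_gap(lambda a, b: mu(a, b) * M(a, b),
                        lambda a, b: mu(a, b) * N(a, b), x, y)
    assert abs(gap) < 1e-4

print("mu = y makes y dx + 2x dy = 0 exact")
```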
Problem 3.3. Solve (xy - 1)dx + (x^2 - xy)dy = 0.
Solution. Here M = xy - 1, N = x^2 - xy, \frac{∂M}{∂y} = x, and \frac{∂N}{∂x} = 2x - y. Since \frac{∂M}{∂y} ≠ \frac{∂N}{∂x}, the given equation is not exact. But

\frac{1}{N}\left(\frac{∂M}{∂y} - \frac{∂N}{∂x}\right) = \frac{1}{x^2 - xy}(y - x) = -\frac{1}{x},

and hence e^{∫(-1/x)dx} = 1/x is an integrating factor. Multiplying by 1/x, we obtain the exact equation

\left(y - \frac{1}{x}\right)dx + (x - y)dy = 0.    (3.17)
Now we can solve the exact equation (3.17) following the method given in Section 3.1.
But here we proceed as follows.
\left(y - \frac{1}{x}\right)dx + (x - y)dy = 0
⇒ (y\,dx + x\,dy) - \left(\frac{1}{x}dx + y\,dy\right) = 0
⇒ d(xy) - d\left(\ln x + \frac{y^2}{2}\right) = 0
⇒ d\left(xy - \ln x - \frac{y^2}{2}\right) = 0
⇒ xy - \ln x - \frac{y^2}{2} = c.
Remark 3.5. Sometimes it is possible to find an integrating factor by inspection. For this, some known differential formulas are useful. A few of these are given below:
d\left(\frac{x}{y}\right) = \frac{y\,dx - x\,dy}{y^2}

d\left(\frac{y}{x}\right) = \frac{x\,dy - y\,dx}{x^2}

d(xy) = x\,dy + y\,dx

d\left(\ln\frac{x}{y}\right) = \frac{y\,dx - x\,dy}{xy}
Problem 3.4. Solve (2x^2 y + y)dx + x\,dy = 0.
Solution. Obviously, we can write this as

2x^2 y\,dx + (y\,dx + x\,dy) = 0
⇒ 2x^2 y\,dx + d(xy) = 0.

Now if we divide this by xy, then the last term remains a differential, and the first term also becomes a differential:

2x\,dx + \frac{d(xy)}{xy} = 0
⇒ d(x^2 + \ln(xy)) = 0
⇒ x^2 + \ln(xy) = C
⇒ y = \frac{1}{x}e^{C - x^2}.
In multiplying by the integrating factor 1/(xy), we assumed that x ≠ 0 and y ≠ 0. We now consider the solution y = 0. This is not a member of the one-parameter family of solutions which we obtained. However, writing the original differential equation of the problem in the derivative form

(2x^2 y + y) + x\frac{dy}{dx} = 0,

it is obvious that y = 0 is a solution of the original equation. Therefore, the solutions of the differential equation are

y = \frac{1}{x}e^{C - x^2} (general solution)
y = 0 (singular solution).
Problem 3.5. Solve y\,dx + (x^2 y - x)dy = 0.
Solution.

y\,dx + (x^2 y - x)dy = 0
⇒ x^2 y\,dy - (x\,dy - y\,dx) = 0.    (3.18)

The quantity in parentheses recalls the differential formula

d\left(\frac{y}{x}\right) = \frac{x\,dy - y\,dx}{x^2},
which suggests the integrating factor 1/x^2. Multiplying (3.18) by the integrating factor 1/x^2, we obtain

y\,dy - \frac{x\,dy - y\,dx}{x^2} = 0
⇒ y\,dy - d\left(\frac{y}{x}\right) = 0.

Therefore, the general solution is

\frac{1}{2}y^2 - \frac{y}{x} = c.
3.3 Separable Equations
Definition 3.4 (Separable Equation). An equation of the form

F(x)G(y)dx + f(x)g(y)dy = 0    (3.19)

is called a separable equation.
A separable equation may not be exact, but we have the following theorem.
Theorem 3.5. A non-exact separable equation (3.19) has the integrating factor

\frac{1}{G(y)f(x)}.

Moreover, a one-parameter family of solutions is given by

∫\frac{F(x)}{f(x)}dx + ∫\frac{g(y)}{G(y)}dy = C.

Proof. Obvious.
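As a sketch of the theorem, consider the separable equation x(1 + y^2)dx + dy = 0 (a concrete equation chosen here for illustration, with F(x) = x, G(y) = 1 + y^2, f(x) = g(y) = 1). The theorem gives the family x^2/2 + arctan(y) = C, which we can verify numerically by differentiating a member of the family:

```python
import math

# Separable equation: x*(1 + y^2) dx + dy = 0, i.e. dy/dx = -x*(1 + y^2).
# Theorem 3.5 gives the one-parameter family  x^2/2 + arctan(y) = C.
def y_of(x, C):
    # Member of the family, solved explicitly for y.
    return math.tan(C - x**2 / 2)

C = 1.0
h = 1e-6
for x in (0.0, 0.4, 0.9):
    y = y_of(x, C)
    dydx = (y_of(x + h, C) - y_of(x - h, C)) / (2 * h)
    # Residual of dy/dx + x*(1 + y^2) = 0 along the curve.
    assert abs(dydx + x * (1 + y**2)) < 1e-3

print("family x^2/2 + arctan(y) = C solves the separable equation")
```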
Remark 3.6. Since we obtained the separated exact equation from the nonexact equation (3.19) by multiplying (3.19) by the integrating factor 1/(G(y)f(x)), solutions may have been lost or gained in this process (cf. Remark 3.4). We now consider this more carefully. Note that we multiply the original equation by the integrating factor 1/(G(y)f(x)) under the assumption that neither f(x) nor G(y) is zero, and under this assumption we proceed to obtain the one-parameter family of solutions. Now, we should investigate the possible loss or gain of solutions that may have occurred in this formal process. In particular, regarding y as the dependent variable as usual, we consider the situation that occurs if G(y) is zero. Writing the original differential equation (3.19) in the derivative form

f(x)g(y)\frac{dy}{dx} + F(x)G(y) = 0,

we immediately note that if y_0 is any real number such that G(y_0) = 0, then y = y_0 is a (constant) solution of the original differential equation. Thus we must find the solutions y = y_0 of the equation G(y) = 0 and determine whether any of these are solutions of the original equation which were lost in the formal separation process.
Problem 3.6. Solve x\frac{dy}{dx} = y + y^2.
Solution. The given equation is

(y + y^2)dx - x\,dy = 0.

The equation is separable, and 1/(x(y + y^2)) is an integrating factor. Multiplying by the integrating factor 1/(x(y + y^2)) (assuming x(y + y^2) ≠ 0), we obtain

\frac{1}{x}dx - \frac{1}{y + y^2}dy = 0
⇒ \frac{1}{x}dx - \left(\frac{1}{y} - \frac{1}{1 + y}\right)dy = 0
⇒ ∫\frac{1}{x}dx - ∫\left(\frac{1}{y} - \frac{1}{1 + y}\right)dy = c, where c is an arbitrary constant
⇒ \ln|x| - (\ln|y| - \ln|1 + y|) = c
⇒ \ln\left|\frac{x(1 + y)}{y}\right| = c.

Thus, we have the one-parameter family of solutions

\ln\left|\frac{x(1 + y)}{y}\right| = c.
In multiplying by the integrating factor 1/(x(y + y^2)) in the separation process, we assumed that x ≠ 0 and y + y^2 ≠ 0. We now consider the solutions y = 0 and y = -1 of y + y^2 = 0. Neither is a member of the one-parameter family of solutions which we obtained. However, writing the original differential equation of the problem in the derivative form

(y + y^2) - x\frac{dy}{dx} = 0,

it is obvious that y = 0 and y = -1 are solutions of the original equation. We conclude that these are solutions which were lost in the separation process. Therefore, the solutions of the differential equation are

\ln\left|\frac{x(1 + y)}{y}\right| = c (general solution)
y = 0 (singular solution)
y = -1 (singular solution).
Problem 3.7. Solve xy^3 dx + (y + 1)e^{-x} dy = 0.
Solution. The equation is separable, and 1/(y^3 e^{-x}) is an integrating factor. Multiplying by the integrating factor 1/(y^3 e^{-x}) (assuming y^3 ≠ 0), we obtain

\frac{x}{e^{-x}}dx + \frac{y + 1}{y^3}dy = 0
⇒ xe^x dx + \left(\frac{1}{y^2} + \frac{1}{y^3}\right)dy = 0
⇒ ∫ xe^x dx + ∫\left(\frac{1}{y^2} + \frac{1}{y^3}\right)dy = c
⇒ (x - 1)e^x - \frac{1}{y} - \frac{1}{2y^2} = c
⇒ (x - 1)e^x = \frac{1}{y} + \frac{1}{2y^2} + c.
We obtained this general solution under the assumption that y^3 ≠ 0, that is, y ≠ 0. However, writing the original differential equation of the problem in the derivative form

\frac{dy}{dx} = -\frac{xy^3}{(y + 1)e^{-x}},

it is obvious that y = 0 is a solution of the original equation, but it cannot be obtained from the general solution for any value of the constant c. Therefore, the solutions of the differential equation are

(x - 1)e^x = \frac{1}{y} + \frac{1}{2y^2} + c (general solution)
y = 0 (singular solution).
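The general solution of Problem 3.7 can be checked by implicit differentiation: with G(x, y) = (x - 1)e^x - 1/y - 1/(2y^2), the slope along a level curve G = c is dy/dx = -G_x/G_y, which should match the derivative form of the equation above. A numeric sketch (sample points arbitrary):

```python
import math

# Level curves of G(x, y) = (x - 1)*e^x - 1/y - 1/(2y^2) form the general solution.
Gx = lambda x, y: x * math.exp(x)        # dG/dx = e^x + (x - 1)e^x = x e^x
Gy = lambda x, y: 1 / y**2 + 1 / y**3    # dG/dy
rhs = lambda x, y: -x * y**3 / ((y + 1) * math.exp(-x))  # ODE in derivative form

for x, y in [(0.5, 1.0), (1.0, 2.0), (-1.0, 0.5)]:
    slope = -Gx(x, y) / Gy(x, y)         # implicit differentiation along G = c
    assert abs(slope - rhs(x, y)) < 1e-9

print("level curves of G satisfy dy/dx = -x*y^3/((y + 1)*e^{-x})")
```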
3.4 Reduction to Separable Equations
3.4.1 Homogeneous Equations
Definition 3.5 (Homogeneous Equation). The first-order differential equation M(x, y)dx + N(x, y)dy = 0 is said to be homogeneous if it can be written in the derivative form as

\frac{dy}{dx} = f\left(\frac{y}{x}\right).
Example 3.3. The equation

(y + \sqrt{x^2 + y^2})dx - x\,dy = 0

is homogeneous, as it can be written as

\frac{dy}{dx} = \frac{y + \sqrt{x^2 + y^2}}{x} = \frac{y}{x} \pm \frac{\sqrt{x^2 + y^2}}{\sqrt{x^2}} = \frac{y}{x} \pm \sqrt{1 + \left(\frac{y}{x}\right)^2},

where the sign is + for x > 0 and - for x < 0, since \sqrt{x^2} = |x|.
Remark 3.7 (Test for Homogeneous Equations).
1. A function F(x, y) is called homogeneous of degree n if F(tx, ty) = t^n F(x, y).
2. If M(x, y) and N(x, y) in the differential equation M(x, y)dx + N(x, y)dy = 0 are both homogeneous of the same degree n, then the differential equation is homogeneous.
Example 3.4. Consider the equation (y + \sqrt{x^2 + y^2})dx - x\,dy = 0 of Example 3.3. Here M(x, y) = y + \sqrt{x^2 + y^2} and N(x, y) = -x. Note that M(tx, ty) = ty + \sqrt{(tx)^2 + (ty)^2} = t(y + \sqrt{x^2 + y^2}) = tM(x, y) for t > 0, and N(tx, ty) = -tx = tN(x, y). Thus both M and N are homogeneous of the same degree 1, and hence the differential equation is homogeneous.
Theorem 3.6. If

M(x, y)dx + N(x, y)dy = 0    (3.20)

is a homogeneous equation, then the change of variables v = y/x transforms (3.20) into a separable equation in the variables v and x.
Proof. Since M(x, y)dx + N(x, y)dy = 0 is homogeneous, it may be written in the form

\frac{dy}{dx} = f\left(\frac{y}{x}\right).    (3.21)

Let y = vx. Then

\frac{dy}{dx} = v + x\frac{dv}{dx},

and (3.21) becomes

v + x\frac{dv}{dx} = f(v),

or

[v - f(v)]dx + x\,dv = 0.

This equation is separable.
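As a worked consequence of Theorem 3.6 applied to Example 3.3 with x > 0: the substitution v = y/x separates the equation to dv/\sqrt{1 + v^2} = dx/x, giving v = sinh(ln x + c) and hence y = x sinh(ln x + c). This can be checked numerically (c = 0 here, an arbitrary choice):

```python
import math

# Example 3.3 for x > 0: dy/dx = y/x + sqrt(1 + (y/x)^2).
# Substituting v = y/x separates to dv/sqrt(1 + v^2) = dx/x,
# so v = sinh(ln x + c) and y = x*sinh(ln x + c).
c = 0.0
y = lambda x: x * math.sinh(math.log(x) + c)

h = 1e-6
for x in (0.5, 1.0, 3.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    v = y(x) / x
    assert abs(dydx - (v + math.sqrt(1 + v**2))) < 1e-4

print("y = x*sinh(ln x + c) solves the homogeneous equation for x > 0")
```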
Problem 3.8. Solve

\frac{dy}{dx} + \frac{x}{y} + 2 = 0,  y ≠ 0,  y(0) = 1.

Solution.

\frac{dy}{dx} + \frac{x}{y} + 2 = 0
⇒ \frac{dy}{dx} = -\frac{x + 2y}{y}
⇒ \frac{dy}{dx} = -\frac{x}{y} - 2.
Therefore, the given equation is homogeneous. Substituting y = vx, we obtain \frac{dy}{dx} = v + x\frac{dv}{dx}, and the given equation reduces to

v + x\frac{dv}{dx} = -\frac{1}{v} - 2
⇒ \left(v + \frac{1}{v} + 2\right)dx + x\,dv = 0
⇒ \frac{(v + 1)^2}{v}dx + x\,dv = 0
⇒ \frac{1}{x}dx + \frac{v}{(v + 1)^2}dv = 0
(multiplying by the integrating factor v/(x(v + 1)^2), assuming x(v + 1)^2 ≠ 0)
⇒ \frac{1}{x}dx + \left(\frac{1}{v + 1} - \frac{1}{(v + 1)^2}\right)dv = 0
⇒ ∫\frac{1}{x}dx + ∫\left(\frac{1}{v + 1} - \frac{1}{(v + 1)^2}\right)dv = C, where C is an arbitrary constant
⇒ \ln|x| + \ln|v + 1| + \frac{1}{v + 1} = C
⇒ \ln|x| + \ln\left|\frac{y}{x} + 1\right| + \frac{1}{\frac{y}{x} + 1} = C (substituting back v = y/x)
⇒ \ln|y + x| + \frac{x}{y + x} = C.

Therefore, the general solution of the given differential equation is

\ln|y + x| + \frac{x}{y + x} = C.
We obtain this general solution under the assumption that (v + 1)2 = 0, that is y = −x.
Note that y = −x is a solution of the given equation as in this case
dy
dx
= −1, and x
y +2 = 1.
This solution cannot be obtained from the general solution for any value of the constant
C. Therefore, the solutions of the differential equation are
ln |y + x| +
x
y + x
= C (general solution)
y = −x (singular solution).
Now using the initial condition y(0) = 1 in the general solution, we obtain
ln 1 + 0 = C
⇒ C = 0.
Hence solution of the given IVP is ln |y + x| + x
y+x = 0. Also note that since y = −x does
not satisfy the initial condition y(0) = 1, y = −x is NOT a solution of the given IVP.
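As a quick check (a SymPy sketch, not part of the original notes; the region x + y > 0 is assumed so the logarithm is real), implicit differentiation of the general solution recovers the original equation:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # work in a region where x + y > 0

# General solution of Problem 3.8, written as F(x, y) = C
F = sp.log(x + y) + x / (x + y)

# Implicit differentiation along a level curve: dy/dx = -F_x / F_y
dydx = -sp.diff(F, x) / sp.diff(F, y)

# The ODE says dy/dx = -x/y - 2; the difference must simplify to 0
assert sp.simplify(dydx - (-x/y - 2)) == 0
```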
Problem 3.9. Solve

    dy/dx + x/y + 2 = 0,  y ≠ 0,  y(1) = −1.

Solution. As shown above, the solutions of the given equation are

    ln|y + x| + x/(y + x) = C   (general solution)
    y = −x                      (singular solution).

Note that the general solution does not satisfy the condition y(1) = −1 for any value of
the constant C (the point (1, −1) lies on the line y = −x, where ln|y + x| is undefined).
Moreover, the singular solution y = −x satisfies this condition. Therefore, the solution
of the given IVP is y = −x.
Problem 3.10. Solve

    xy dy/dx = y^2 + 2x^2,  y(1) = 2.

Solution.

    xy dy/dx = y^2 + 2x^2
    ⇒ dy/dx = (y^2 + 2x^2)/(xy)   (assuming x ≠ 0, y ≠ 0)
    ⇒ dy/dx = [(y/x)^2 + 2]/(y/x).

The given equation is homogeneous. Substituting v = y/x, we obtain dy/dx = v + x dv/dx,
and the given equation becomes

    v + x dv/dx = (v^2 + 2)/v
    ⇒ [v − (v^2 + 2)/v]dx + x dv = 0
    ⇒ −(2/v)dx + x dv = 0
    ⇒ −(2/x)dx + v dv = 0   (multiplying by the integrating factor v/x, assuming x ≠ 0)
    ⇒ v dv = (2/x)dx
    ⇒ ∫v dv = ∫(2/x)dx + C, where C is an arbitrary constant
    ⇒ v^2/2 = ln x^2 + C
    ⇒ y^2/(2x^2) = ln x^2 + C   (substituting back v = y/x)
    ⇒ y^2 = 2x^2(ln x^2 + C).
Therefore, the general solution of the given differential equation is given by

    y^2 = 2x^2(ln x^2 + C).

Now the initial condition y(1) = 2 gives

    4 = 2(ln 1 + C) ⇒ C = 2.

Hence the solution of the given IVP is y^2 = 2x^2(ln x^2 + 2).
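The IVP solution can be verified symbolically (a SymPy sketch added for checking; x > 0 is assumed so that the positive square root y = x√(2(ln x² + 2)) is the branch through (1, 2)):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Branch of y^2 = 2x^2(ln x^2 + 2) passing through (1, 2)
y = x * sp.sqrt(2 * (sp.log(x**2) + 2))

# Check the ODE  x*y*y' = y^2 + 2x^2  and the initial condition y(1) = 2
assert sp.simplify(x * y * sp.diff(y, x) - y**2 - 2*x**2) == 0
assert y.subs(x, 1) == 2
```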
3.4.2 Substitution Method

Theorem 3.7. Consider the ODE

    dy/dx = F(ax + by + c),    (3.22)

where b ≠ 0. Then the change of variables v = ax + by + c transforms (3.22) into a
separable equation in the variables v and x.

Remark 3.8. If b = 0, then (3.22) is already in separable form.

Problem 3.11. Solve dy/dx = (x + y)^2.

Solution. Let x + y = v. Then

    dy/dx = dv/dx − 1,

and thus the given equation reduces to

    dv/dx = 1 + v^2
    ⇒ ∫dv/(1 + v^2) = ∫dx + c   (v^2 + 1 ≠ 0)
    ⇒ tan^(−1) v = x + c
    ⇒ v = tan(x + c)
    ⇒ x + y = tan(x + c).
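The family x + y = tan(x + c) can be checked directly (a SymPy sketch added for verification, with c a symbolic constant):

```python
import sympy as sp

x, c = sp.symbols('x c')

# Solve the implicit relation x + y = tan(x + c) for y
y = sp.tan(x + c) - x

# y' = sec^2(x+c) - 1 = tan^2(x+c) = (x + y)^2, as Problem 3.11 requires
assert sp.simplify(sp.diff(y, x) - (x + y)**2) == 0
```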
Theorem 3.8. Consider the differential equation of the form

    dy/dx = (ax + by + c)/(αx + βy + γ),  ab ≠ 0, αβ ≠ 0.    (3.23)

1. Let a/α ≠ b/β, that is, the lines ax + by + c = 0 and αx + βy + γ = 0 intersect, say,
   at the point (h, k). Then the substitution X = x − h, Y = y − k transforms the
   equation (3.23) to a homogeneous equation.
2. Let a/α = b/β, that is, the lines ax + by + c = 0 and αx + βy + γ = 0 are parallel. Then
   the substitution v = ax + by transforms the equation (3.23) to a separable equation.
Problem 3.12. Solve

    dy/dx = (2x − y + 1)/(x − 2y + 1),  x − 2y + 1 ≠ 0.

Solution. The given equation is not homogeneous. The point of intersection of the lines
2x − y + 1 = 0 and x − 2y + 1 = 0 is (−1/3, 1/3). Let X = x + 1/3 and Y = y − 1/3. Then
dy/dx = dY/dX. Therefore, under this substitution, the given equation reduces to

    dY/dX = (2X − Y)/(X − 2Y)
    ⇒ dY/dX = (2 − Y/X)/(1 − 2Y/X),

which is a homogeneous equation. Substituting v = Y/X, we obtain dY/dX = v + X dv/dX,
and the equation reduces to

    v + X dv/dX = (2 − v)/(1 − 2v)
    ⇒ −[2(v^2 − v + 1)/(1 − 2v)]dX + X dv = 0
    ⇒ −(2/X)dX + [(1 − 2v)/(v^2 − v + 1)]dv = 0
      (multiplying by the integrating factor (1 − 2v)/(X(v^2 − v + 1)), as v^2 − v + 1 ≠ 0)
    ⇒ ∫−(2/X)dX + ∫[(1 − 2v)/(v^2 − v + 1)]dv = C1, where C1 is an arbitrary constant
    ⇒ −2 ln|X| − ln|v^2 − v + 1| = C1
    ⇒ ln X^2 + ln|v^2 − v + 1| = C2, where C2 = −C1
    ⇒ ln|X^2(v^2 − v + 1)| = C2
    ⇒ ln|X^2(Y^2/X^2 − Y/X + 1)| = C2
    ⇒ ln(Y^2 − XY + X^2) = C2   (Y^2 − XY + X^2 ≥ 0)
    ⇒ Y^2 − XY + X^2 = C3, where C3 = e^(C2)
    ⇒ (y − 1/3)^2 − (x + 1/3)(y − 1/3) + (x + 1/3)^2 = C3.
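Implicit differentiation confirms the translated family solves the translated equation (a SymPy sketch added for checking, in the X, Y variables):

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Family Y^2 - XY + X^2 = C, treated as a level curve F(X, Y) = C
F = Y**2 - X*Y + X**2
dYdX = -sp.diff(F, X) / sp.diff(F, Y)   # implicit dY/dX along the curve

# Should match the homogeneous equation dY/dX = (2X - Y)/(X - 2Y)
assert sp.simplify(dYdX - (2*X - Y)/(X - 2*Y)) == 0
```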
Problem 3.13. Solve

    dy/dx = (x − y + 5)/(2x − 2y − 2),  x − y − 1 ≠ 0.

Solution. The given equation is not homogeneous. Moreover, the lines x − y + 5 = 0
and 2x − 2y − 2 = 0 are parallel. Therefore, we use the transformation v = x − y. Then
dy/dx = 1 − dv/dx. Therefore, the given equation becomes

    1 − dv/dx = (v + 5)/(2v − 2)
    ⇒ dv/dx = 1 − (v + 5)/(2v − 2)
    ⇒ dv/dx = (v − 7)/(2(v − 1))
    ⇒ [(v − 1)/(v − 7)]dv = (1/2)dx
      (multiplying by the integrating factor (v − 1)/(v − 7), assuming v − 7 ≠ 0)
    ⇒ [1 + 6/(v − 7)]dv = (1/2)dx
    ⇒ ∫[1 + 6/(v − 7)]dv = ∫(1/2)dx + C
    ⇒ v + 6 ln|v − 7| = x/2 + C
    ⇒ (x − y) + 6 ln|x − y − 7| = x/2 + C
    ⇒ x/2 − y + 6 ln|x − y − 7| = C.

Therefore, the general solution is

    x/2 − y + 6 ln|x − y − 7| = C.    (3.24)

We obtain this general solution under the assumption that v − 7 ≠ 0, that is, x − y − 7 ≠ 0.
Note that x − y − 7 = 0 is a solution of the given equation, as in this case dy/dx = 1 and
(x − y + 5)/(2x − 2y − 2) = (7 + 5)/(14 − 2) = 1. This solution cannot be obtained from the
general solution (3.24) for any value of the constant C. Therefore, the solutions of the
differential equation are

    x/2 − y + 6 ln|x − y − 7| = C   (general solution)
    x − y = 7                       (singular solution).
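The general solution (3.24) can again be verified by implicit differentiation (a SymPy sketch added for checking; the region x − y − 7 > 0 is assumed so the logarithm is real):

```python
import sympy as sp

x, y = sp.symbols('x y')

# General solution of Problem 3.13 as a level curve F(x, y) = C
F = x/2 - y + 6*sp.log(x - y - 7)
dydx = -sp.diff(F, x) / sp.diff(F, y)   # implicit dy/dx

# Should match the original right-hand side (x - y + 5)/(2x - 2y - 2)
assert sp.simplify(dydx - (x - y + 5)/(2*x - 2*y - 2)) == 0
```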
Theorem 3.9. An ODE of the form

    dy/dx = y/x + g(x)h(y/x)

can be reduced to separable form by substituting v = y/x.
3.5 Linear Differential Equations
Definition 3.6 (First Order Linear Differential Equation). A first-order ordinary differ-
ential equation is linear in the dependent variable y and the independent variable x if it
is, or can be, written in the form

    a0(x) dy/dx + a1(x)y = b(x),
where a0 is not identically zero.
A first-order linear ordinary differential equation can be put in the form

    dy/dx + P(x)y = Q(x),

which is called the standard form.
Example 3.5. The equation

    x dy/dx + (x + 1)y = x^3

is a first-order linear differential equation. The standard form of this differential equation
is

    dy/dx + (1 + 1/x)y = x^2.
Theorem 3.10. The linear differential equation

    dy/dx + P(x)y = Q(x)    (3.25)

has an integrating factor of the form e^(∫P(x)dx). A one-parameter family of solutions of
this equation is

    y e^(∫P(x)dx) = ∫e^(∫P(x)dx) Q(x) dx + C,

that is,

    y = e^(−∫P(x)dx) [∫e^(∫P(x)dx) Q(x) dx + C].    (3.26)

Proof. The linear equation (3.25) can be written as

    (P(x)y − Q(x))dx + dy = 0.    (3.27)

Here M = P(x)y − Q(x) and N = 1. Now

    (1/N)(∂M/∂y − ∂N/∂x) = P(x).

Hence, μ(x) = e^(∫P(x)dx) is an integrating factor. Multiplying equation (3.27) by
μ(x) = e^(∫P(x)dx), we obtain

    e^(∫P(x)dx) P(x)y dx − Q(x)e^(∫P(x)dx) dx + e^(∫P(x)dx) dy = 0
    ⇒ d(y e^(∫P(x)dx)) = Q(x)e^(∫P(x)dx) dx
    ⇒ ∫d(y e^(∫P(x)dx)) = ∫Q(x)e^(∫P(x)dx) dx + C
    ⇒ y e^(∫P(x)dx) = ∫Q(x)e^(∫P(x)dx) dx + C
    ⇒ y = e^(−∫P(x)dx) [∫e^(∫P(x)dx) Q(x) dx + C].
Remark 3.9. It can be shown that the one-parameter family of solutions (3.26) of the
linear equation (3.25) includes all solutions of (3.25).

Method:
1. Put the equation in standard linear form dy/dx + P(x)y = Q(x).
2. Find the integrating factor u(x) = e^(∫P(x)dx).
3. Multiply both sides by e^(∫P(x)dx), and put the resulting equation in the form

       d/dx [y e^(∫P(x)dx)] = e^(∫P(x)dx) Q(x).

4. Integrate.
Example 3.6. Solve x dy/dx − y = x^3.

Solution.
1. The given equation in standard linear form is

       dy/dx − (1/x)y = x^2.

2. The integrating factor is u(x) = e^(∫(−1/x)dx) = 1/x.
3. Multiplying both sides of the equation by 1/x, we obtain

       (1/x)dy/dx − (1/x^2)y = x
       ⇒ d/dx (y/x) = x.

4. Integrating,

       y/x = ∫x dx + C
       ⇒ y/x = x^2/2 + C
       ⇒ y = x^3/2 + Cx.
Example 3.7. Solve (1 + cos x) dy/dx − (sin x)y = 2x.

Solution.
1. The given equation in standard linear form is

       dy/dx − [sin x/(1 + cos x)]y = 2x/(1 + cos x).

2. The integrating factor is u(x) = e^(∫−sin x/(1+cos x) dx) = 1 + cos x.
3. Multiplying both sides of the equation by 1 + cos x, we obtain

       (1 + cos x) dy/dx − (sin x)y = 2x
       ⇒ d/dx [(1 + cos x)y] = 2x.

4. Integrating,

       (1 + cos x)y = ∫2x dx + C
       ⇒ (1 + cos x)y = x^2 + C.
Example 3.8. Solve dy/dx + 2xy = 2x.

Solution. The integrating factor is e^(∫2x dx) = e^(x^2). Hence,

    y e^(x^2) = ∫2x e^(x^2) dx + C = e^(x^2) + C
    ⇒ y = 1 + Ce^(−x^2).
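The four-step method can be packaged as a small routine (a SymPy sketch added for illustration; `solve_linear` is a helper name introduced here, not part of the notes), and Example 3.8 serves as a test case:

```python
import sympy as sp

x, C = sp.symbols('x C')

def solve_linear(P, Q):
    """Solve dy/dx + P(x)*y = Q(x) by the integrating-factor method above."""
    mu = sp.exp(sp.integrate(P, x))          # Step 2: integrating factor e^(int P dx)
    y = (sp.integrate(mu * Q, x) + C) / mu   # Steps 3-4: y*mu = int(mu*Q) + C
    return sp.simplify(y)

# Example 3.8: dy/dx + 2xy = 2x  has solution  y = 1 + C e^{-x^2}
y = solve_linear(2*x, 2*x)
assert sp.simplify(y - (1 + C*sp.exp(-x**2))) == 0
```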
Remark 3.10. As mentioned in Remark 3.1, in the differential equation M(x, y)dx +
N(x, y)dy = 0 we may regard either variable as the dependent one and the other as the
independent one; we usually treat y as dependent and x as independent. In trying to solve
a first order ODE, it is sometimes helpful to reverse the roles of x and y and work on the
resulting equation. In particular, the resulting equation

    dx/dy + P(y)x = Q(y)

is also a linear equation (with dependent variable x).
Problem 3.14. Solve (4y^3 − 2xy) dy/dx = y^2,  y(2) = 1.

Solution. We write the given equation as

    dx/dy + (2/y)x = 4y.

An integrating factor is given by e^(∫(2/y)dy) = y^2. Therefore,

    xy^2 = ∫4y^3 dy + C = y^4 + C.

Using the initial condition y(2) = 1 (that is, x = 2 when y = 1), we obtain C = 2 − 1 = 1.
Therefore we obtain the solution

    xy^2 = y^4 + 1.
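With the roles of x and y reversed, the solution is explicit, x = y² + 1/y², and checks out against the linear equation in x (a SymPy sketch added for verification):

```python
import sympy as sp

y = sp.symbols('y', positive=True)

# Solve xy^2 = y^4 + 1 for x, treating x as a function of y
X = y**2 + 1/y**2

# Check the linear equation  dx/dy + (2/y)x = 4y  and the condition x = 2 at y = 1
assert sp.simplify(sp.diff(X, y) + (2/y)*X - 4*y) == 0
assert X.subs(y, 1) == 2
```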
3.5.1 Bernoulli's Equation

Definition 3.7. An equation of the form

    dy/dx + P(x)y = Q(x)y^α,

where α is a real number, is called a Bernoulli differential equation.

If α = 0 or 1, then the Bernoulli equation is actually a linear equation. The following
theorem gives a method of solution in the general case.

Theorem 3.11. Suppose α ≠ 0 and α ≠ 1. Then the transformation v = y^(1−α) reduces
the Bernoulli equation

    dy/dx + P(x)y = Q(x)y^α    (3.28)

to a linear equation in v.

Proof. We first multiply Equation (3.28) by y^(−α), thereby expressing it in the equivalent
form

    y^(−α) dy/dx + P(x)y^(1−α) = Q(x).    (3.29)

If we let v = y^(1−α), then

    dv/dx = (1 − α)y^(−α) dy/dx,

and Equation (3.29) transforms into

    [1/(1 − α)] dv/dx + P(x)v = Q(x),

or, equivalently,

    dv/dx + (1 − α)P(x)v = (1 − α)Q(x).

Letting P1(x) = (1 − α)P(x) and Q1(x) = (1 − α)Q(x), this may be written

    dv/dx + P1(x)v = Q1(x),

which is linear in v.
Problem 3.15. Solve

    dy/dx − y/x = y^3.

Solution. We first multiply the given equation by y^(−3) (assuming y ≠ 0) to obtain

    y^(−3) dy/dx − y^(−2)/x = 1.    (3.30)

If v = y^(−2), then we obtain

    dv/dx = −2y^(−3) dy/dx.
Substituting in (3.30), we obtain

    −(1/2) dv/dx − v/x = 1
    ⇒ dv/dx + (2/x)v = −2.

An integrating factor of this linear equation is given by e^(∫(2/x)dx) = x^2. Thus, we
obtain

    vx^2 = −2∫x^2 dx + C0
    ⇒ y^(−2)x^2 = −2x^3/3 + C0
    ⇒ 3x^2/y^2 + 2x^3 = C   (C = 3C0).

We obtain this general solution under the assumption that y ≠ 0. Note that y = 0 is
a solution of the given equation, and this solution cannot be obtained from the general
solution for any value of the constant C. Therefore, the solutions of the differential
equation are

    3x^2/y^2 + 2x^3 = C   (general solution)
    y = 0                 (singular solution).
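Implicit differentiation of the general solution recovers the Bernoulli equation (a SymPy sketch added for checking; x ≠ 0, y ≠ 0 are assumed as in the derivation):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # avoid the excluded lines x = 0, y = 0

# General solution of Problem 3.15 as a level curve F(x, y) = C
F = 3*x**2/y**2 + 2*x**3
dydx = -sp.diff(F, x) / sp.diff(F, y)    # implicit dy/dx

# Should match the Bernoulli equation dy/dx = y/x + y^3
assert sp.simplify(dydx - y/x - y**3) == 0
```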
4 Families of Curves and Orthogonal Trajectories
4.1 Families of Curves and Corresponding Differential Equations
We have seen that the general solution of a first order differential equation normally
contains one arbitrary constant, called a parameter. When this parameter is assigned
various values, we obtain a one-parameter family of curves. Each of these curves is a
particular solution, or integral curve, of the given differential equation, and all of them
together constitute its general solution.
Conversely, the curves of any one-parameter family are integral curves of some first order
differential equation. If the family is
f(x, y, c) = 0, (4.1)
then its differential equation can be found by the following steps.
Step 1: Differentiate (4.1) implicitly with respect to x to get a relation of the form

    g(x, y, dy/dx, c) = 0.    (4.2)
Step 2: Eliminate the parameter c from (4.1) and (4.2) to obtain

    F(x, y, dy/dx) = 0

as the desired differential equation.
Problem 4.1. Consider the one-parameter family of curves

    x^2 + y^2 = c^2.    (4.3)

Determine a differential equation F(x, y, dy/dx) = 0 such that the curves of the family
(4.3) are integral curves of the differential equation.

Solution.
Step 1: Differentiating (4.3) with respect to x, we obtain

    2x + 2y dy/dx = 0.    (4.4)

Step 2: Since c is already absent, there is no need to eliminate it, and

    x + y dy/dx = 0    (4.5)

is the differential equation of the given family of circles.
Problem 4.2. Consider the one-parameter family of curves

    x^2 + y^2 = 2cx.    (4.6)

Determine a differential equation F(x, y, dy/dx) = 0 such that the curves of the family
(4.6) are integral curves of the differential equation.

Solution.
Step 1: Differentiating (4.6) with respect to x, we obtain

    2x + 2y dy/dx = 2c
    ⇒ x + y dy/dx = c.    (4.7)

Step 2: Eliminating c from (4.6) and (4.7), we obtain

    x + y dy/dx = (x^2 + y^2)/(2x)
    ⇒ dy/dx = (y^2 − x^2)/(2xy).

Thus, dy/dx = (y^2 − x^2)/(2xy) is the differential equation of the given family of curves.
4.2 Orthogonal Trajectories
Definition 4.1 (Orthogonal Trajectories). Let
F(x, y, c) = 0 (4.8)
be a given one-parameter family of curves in the xy plane. A curve that intersects the
curves of the family (4.8) at right angles is called an orthogonal trajectory of the given
family.
If we have two families of curves such that each curve in either family is orthogonal to every
curve in the other family, then each family of curves is said to be a family of orthogonal
trajectories of the other.
Figure 2:
Example 4.1. Consider the family of circles

    x^2 + y^2 = c^2    (4.9)

with center at the origin and radius c (cf. Figure 2). The family of circles represented by
(4.9) and the family y = kx of straight lines through the origin (the dotted lines in Figure
2) are orthogonal trajectories of each other.
4.2.1 How to Find Orthogonal Trajectories
To find the orthogonal trajectories of the family

    f(x, y, c) = 0,    (4.10)
Figure 3: Orthogonal trajectories (slope dy/dx and slope −1/(dy/dx))
we proceed as follows:

Step 1: Differentiate (4.10) implicitly with respect to x to get a relation of the form

    g(x, y, dy/dx, c) = 0.    (4.11)

Step 2: Eliminate the parameter c from (4.10) and (4.11) to obtain the differential
equation

    F(x, y, dy/dx) = 0    (4.12)

corresponding to the first family (4.10).

Step 3: Replace dy/dx by −1/(dy/dx) in (4.12) to obtain the differential equation

    H(x, y, dy/dx) = 0    (4.13)

of the orthogonal trajectories (cf. Figure 3).

Step 4: The general solution of (4.13) gives the required orthogonal trajectories.
Problem 4.3. Find the orthogonal trajectories of the family of straight lines through the
origin.

Solution. The family of straight lines through the origin is given by

    y = kx.    (4.14)

To find the orthogonal trajectories, we proceed as follows.
Step 1: Differentiating (4.14) with respect to x, we obtain

    dy/dx = k.    (4.15)

Step 2: Eliminating k from (4.14) and (4.15), we obtain

    dy/dx = y/x.    (4.16)

This gives the differential equation of the family (4.14).

Step 3: Replacing dy/dx by −1/(dy/dx) in (4.16), we obtain

    dy/dx = −x/y.    (4.17)

This gives the differential equation of the orthogonal trajectories.

Step 4: Solving differential equation (4.17), we obtain

    x^2 + y^2 = C.    (4.18)

Thus the orthogonal trajectories of the family of straight lines through the origin are given
by (4.18). Note that (4.18) is the family of circles with centre at the origin.
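Orthogonality can be spot-checked numerically: at any point off the axes, the line through the origin has slope y/x (from (4.16)) and the circle through that point has slope −x/y (from (4.17)), and perpendicular slopes multiply to −1 (a small Python check added for illustration):

```python
# Sample points (x, y) off the axes; at each, a line y = kx and a circle
# x^2 + y^2 = C pass through it, with slopes y/x and -x/y respectively.
points = [(1.0, 2.0), (-0.5, 3.0), (2.5, -1.25), (0.3, 0.7)]

for px, py in points:
    m_line = py / px        # slope of the line through the origin
    m_circle = -px / py     # slope of the circle through (px, py)
    assert abs(m_line * m_circle + 1) < 1e-12   # product of perpendicular slopes is -1
```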
4.3 Oblique Trajectories
Figure 4: Oblique trajectories (slopes m1 and m2, angle α)
Definition 4.2. Let

    f(x, y, c) = 0    (4.19)

be a one-parameter family of curves. A curve that intersects the curves of the family
(4.19) at a constant angle α ≠ π/2 is called an oblique trajectory of the given family.
Let m1 and m2 be the slope of the curves of the family (4.19) and the slope of the
oblique trajectories of the family (4.19), respectively. Then we have

    ±tan α = (m1 − m2)/(1 + m1m2)
    ⇒ m2 = (m1 ∓ tan α)/(1 ± m1 tan α).    (4.20)
Recall that to obtain the orthogonal trajectories, we replaced dy/dx in the differential
equation of the given family of curves by −1/(dy/dx). To obtain oblique trajectories, due
to (4.20), we need to replace dy/dx in the differential equation of the given family of
curves by

    (dy/dx ± tan α)/(1 ∓ (dy/dx) tan α)    (4.21)

(either choice of sign yields a family of oblique trajectories; cf. Remark 4.1). Thus, we
have the following procedure to obtain oblique trajectories intersecting the given family
of curves (4.19) at the constant angle α ≠ π/2.
Step 1: Differentiate (4.19) implicitly with respect to x to get a relation of the form

    g(x, y, dy/dx, c) = 0.    (4.22)

Step 2: Eliminate the parameter c from (4.19) and (4.22) to obtain the differential
equation

    F(x, y, dy/dx) = 0    (4.23)

corresponding to the first family (4.19).

Step 3: Replace dy/dx by

    (dy/dx + tan α)/(1 − (dy/dx) tan α)

to obtain the differential equation

    H(x, y, dy/dx) = 0    (4.24)

of the oblique trajectories.

Step 4: The general solution of (4.24) gives the required oblique trajectories.
Remark 4.1. If we instead replace dy/dx by

    (dy/dx − tan α)/(1 + (dy/dx) tan α)

in (4.23) in Step 3, and obtain

    H1(x, y, dy/dx) = 0,    (4.25)

then the general solution of (4.25) also gives a family of oblique trajectories.

Problem 4.4. Find the oblique trajectories that intersect the family

    y = x + c    (4.26)

at an angle of 60°.
Solution.
Step 1: Differentiating (4.26) with respect to x, we obtain dy/dx = 1.

Step 2: Since c is already absent, there is no need to eliminate it, and

    dy/dx = 1    (4.27)

is the differential equation of the family (4.26).

Step 3: Replacing dy/dx by

    (dy/dx − tan α)/(1 + (dy/dx) tan α),  i.e.,  (dy/dx − √3)/(1 + √3 dy/dx),

in (4.27), we obtain

    dy/dx = (1 + √3)/(1 − √3).    (4.28)

This gives the differential equation of the oblique trajectories.
Step 4: Solving differential equation (4.28), we obtain

    y = [(1 + √3)/(1 − √3)]x + C.    (4.29)

Thus (4.29) gives a family of oblique trajectories that intersect the given family of curves
at an angle of 60°.
Remark 4.2. If we replace dy/dx by

    (dy/dx + tan α)/(1 − (dy/dx) tan α),  i.e.,  (dy/dx + √3)/(1 − √3 dy/dx),

in (4.27), then we obtain

    dy/dx = (1 − √3)/(1 + √3).    (4.30)

The general solution of this equation is

    y = [(1 − √3)/(1 + √3)]x + C.    (4.31)

This is another family of oblique trajectories that intersects the given family of curves at
an angle of 60°.
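Both slopes in (4.29) and (4.31) do make a 60° angle with the lines of slope 1, which a short numeric check confirms (added for illustration, using the angle formula tan θ = |(m1 − m2)/(1 + m1 m2)|):

```python
import math

m1 = 1.0                     # slope of the given family y = x + c
r3 = math.sqrt(3)

# Slopes of the two oblique-trajectory families, (4.29) and (4.31)
for m2 in [(1 + r3) / (1 - r3), (1 - r3) / (1 + r3)]:
    tan_angle = abs((m1 - m2) / (1 + m1 * m2))
    angle = math.degrees(math.atan(tan_angle))
    assert abs(angle - 60.0) < 1e-9   # each family meets y = x + c at 60 degrees
```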
5 Picard’s Existence and Uniqueness Theorem
Example 5.1 (IVP with no solution). The IVP

    |y'| + |y| = 0,  y(0) = 1

has no solution, because y ≡ 0 (that is, y(x) = 0 for all x) is the only solution of the ODE.
Another example of an IVP which does not have any solution is given by

    dy/dx = { 0 if x ≤ 0;  1 if x > 0 },  y(0) = 0.

Example 5.2 (IVP with many solutions). The IVP

    x dy/dx = y − 1,  y(0) = 1

has infinitely many solutions, namely, y = 1 + cx, where c is an arbitrary constant,
because y(0) = 1 for all c.
Example 5.3 (IVP with many solutions). The IVP

    dy/dx = y^(1/3),  y(0) = 0

has infinitely many solutions, as

    y = { 0 if x ≤ c;  [(2/3)(x − c)]^(3/2) if x ≥ c }

is a solution of the stated problem for every real number c ≥ 0.
Example 5.4 (IVP with unique solution). The IVP

    dy/dx = 2x,  y(0) = 1

has precisely one solution, namely, y = x^2 + 1.
Example 5.6. The first-order differential equation

    x dy/dx − 3y + 3 = 0

has no solution satisfying the initial condition y(0) = 0, just one solution y(x) = 1 satis-
fying y(1) = 1, and an infinite number of solutions y(x) = 1 + cx^3 satisfying y(0) = 1.
The above examples motivate us to raise the following questions on the solution of the
first order IVP

    dy/dx = f(x, y),  y(x0) = y0.    (5.1)

1. Under what conditions does there exist a solution to (5.1)?
2. Under what conditions does there exist a unique solution to (5.1)?
5.1 Lipschitz Condition
Definition 5.1. Let f be defined on D, where D is either a domain or the closure of a
domain of the xy plane. The function f is said to satisfy a Lipschitz condition (with
respect to y) in D if there exists a constant K > 0 such that

    |f(x, y1) − f(x, y2)| ≤ K|y1 − y2|    (5.2)

for every (x, y1) and (x, y2) which belong to D. The constant K is called a Lipschitz
constant.
Remark 5.1. If f satisfies condition (5.2) in D, then for each fixed x the resulting function
of y is a continuous function of y for (x, y) belonging to D. Note, however, that condition
(5.2) implies nothing at all concerning the continuity of f with respect to x. For example,
the function f defined by
f(x, y) = y + [x],
where [x] denotes the greatest integer less than or equal to x, satisfies a Lipschitz condition
in any bounded domain D. For each fixed x, the resulting function of y is continuous.
However, this function f is discontinuous with respect to x for every integral value of x.
The following result is very useful to prove that a given function satisfies a Lipschitz
condition.
Theorem 5.1. Let f be such that ∂f/∂y exists and is bounded for all (x, y) ∈ D, where D
is a domain or the closure of a domain such that the line segment joining any two points of
D lies entirely within D. Then f satisfies a Lipschitz condition (with respect to y) in D.

Proof. Since ∂f/∂y is bounded in D, there exists a K > 0 such that

    |∂f(x, y)/∂y| ≤ K for all (x, y) ∈ D.    (5.3)

We claim that

    |f(x0, y1) − f(x0, y2)| ≤ K|y1 − y2| for all (x0, y1), (x0, y2) ∈ D.    (5.4)

Consider the points (x0, y1), (x0, y2) ∈ D, and without loss of generality assume that
y1 < y2. Note that since the line segment joining any two points of D lies entirely in D,
we have (x0, y) ∈ D for all y ∈ [y1, y2]. Now consider the function g : [y1, y2] → R defined
by

    g(y) = f(x0, y).

Note that g is differentiable on [y1, y2], and

    dg(y0)/dy = ∂f(x0, y0)/∂y for all y0 ∈ [y1, y2].    (5.5)

Applying the mean value theorem to g, we obtain a ξ ∈ (y1, y2) such that

    |g(y1) − g(y2)| = |y1 − y2| |dg(ξ)/dy|
    ⇒ |f(x0, y1) − f(x0, y2)| = |y1 − y2| |∂f(x0, ξ)/∂y|   (by (5.5))
    ⇒ |f(x0, y1) − f(x0, y2)| ≤ K|y1 − y2|                 (by (5.3)).

This completes the proof.
Remark 5.2. A function f(x, y) may satisfy a Lipschitz condition even though ∂f/∂y does
not exist everywhere. For example, f(x, y) = x^2|y|, |x| ≤ 1, |y| ≤ 1, satisfies a Lipschitz
condition with respect to y, but ∂f/∂y does not exist at (x, 0) for x ≠ 0 (prove it!).
Problem 5.1. Show that each of the functions defined as follows satisfies a Lipschitz
condition in the rectangle D defined by D = {(x, y) : |x| ≤ a, |y| ≤ b}.

1. f(x, y) = y^2.
2. f(x, y) = x sin y + y cos x.
3. f(x, y) = A(x)y^2 + B(x)y + C(x), where A, B and C are continuous on |x| ≤ a.

Solution. (1): Here ∂f/∂y = 2y, and hence

    |∂f/∂y| = |2y| ≤ 2b for all (x, y) ∈ D.

Therefore, by Theorem 5.1, f satisfies a Lipschitz condition.

(2): Here ∂f/∂y = x cos y + cos x, and hence

    |∂f/∂y| = |x cos y + cos x| ≤ |x cos y| + |cos x| ≤ |x||cos y| + 1 ≤ |x| + 1 ≤ a + 1

for all (x, y) ∈ D. Therefore, by Theorem 5.1, f satisfies a Lipschitz condition.

(3): Since A and B are continuous on |x| ≤ a, they are also bounded on |x| ≤ a.
Therefore, there exist positive numbers K1 and K2 such that

    |A(x)| ≤ K1 and |B(x)| ≤ K2 for all x with |x| ≤ a.

Therefore, we have

    |∂f/∂y| = |2A(x)y + B(x)| ≤ 2K1|y| + K2 ≤ 2K1b + K2

for all (x, y) ∈ D. Therefore, by Theorem 5.1, f satisfies a Lipschitz condition.
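The bound from part (1), K = 2b, can be probed numerically on random points of the rectangle (a Python sketch added for illustration, with a and b chosen arbitrarily):

```python
import random

random.seed(0)
a, b = 2.0, 3.0
K = 2 * b   # Lipschitz constant for f(x, y) = y^2 on |x| <= a, |y| <= b (part (1))

for _ in range(1000):
    y1 = random.uniform(-b, b)
    y2 = random.uniform(-b, b)
    # |y1^2 - y2^2| = |y1 + y2| |y1 - y2| <= 2b |y1 - y2|
    assert abs(y1**2 - y2**2) <= K * abs(y1 - y2) + 1e-12
```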
Problem 5.2. Show that the function f(x, y) = y^2 does not satisfy a Lipschitz condition
in the domain D := {(x, y) : |x| ≤ a, −∞ < y < ∞}.

Solution. If f(x, y) satisfies a Lipschitz condition in D, then we must have a K such that
for all (x, y1), (x, y2) ∈ D with y1 ≠ y2,

    |f(x, y1) − f(x, y2)| ≤ K|y1 − y2|
    ⇒ |f(x, y1) − f(x, y2)|/|y1 − y2| ≤ K.    (5.6)
Let |x0| ≤ a. Then (x0, 0) ∈ D, and from (5.6), for all y ∈ (−∞, ∞) with y ≠ 0, we must
have

    |f(x0, 0) − f(x0, y)|/|0 − y| = |y| ≤ K.

This is not possible, as |y| is unbounded for y ∈ (−∞, ∞).
Problem 5.3. Show that the function f(x, y) = y^(2/3) does not satisfy a Lipschitz
condition throughout any domain which includes the line y = 0.

Solution. Let D be any domain which includes the line y = 0.
If f(x, y) satisfies a Lipschitz condition in D, then we must have a constant K such that

    |f(x, y1) − f(x, y2)| ≤ K|y1 − y2| for all (x, y1), (x, y2) ∈ D.

That is,

    |f(x, y1) − f(x, y2)|/|y1 − y2| ≤ K for all (x, y1), (x, y2) ∈ D, y1 ≠ y2.    (5.7)

Therefore, as (x0, 0) ∈ D, where x0 is any fixed real number with (x0, 0) ∈ D, it follows
that if f(x, y) satisfies a Lipschitz condition in D, then we must have (taking x = x0 and
y1 = 0 in (5.7))

    |y2^(2/3)|/|y2| = 1/|y2|^(1/3) ≤ K for all (x0, y2) ∈ D, y2 ≠ 0.    (5.8)

Now, since D is an open set and (x0, 0) ∈ D, there exists an r > 0 such that the line
segment L := {(x0, y) : 0 ≤ y ≤ r} ⊆ D. Note that (5.8) does not hold for all (x0, y) ∈
L ⊆ D, as 1/|y2|^(1/3) → ∞ as y2 → 0 along the line segment L. Therefore, f(x, y) does
not satisfy a Lipschitz condition in D.
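The blow-up of the difference quotient in (5.8) is easy to see numerically (a Python sketch added for illustration):

```python
# For f(y) = y^(2/3), the quotient |f(y) - f(0)| / |y - 0| equals |y|^(-1/3),
# which grows without bound as y -> 0, so no single Lipschitz constant works.
quotients = []
for n in range(1, 7):
    y = 10.0 ** (-3 * n)                 # y = 1e-3, 1e-6, ..., 1e-18
    q = (y ** (2/3)) / y                 # equals y^(-1/3) = 10^n
    quotients.append(q)

# Each step multiplies the quotient by roughly 10: unbounded growth
assert all(b > 9 * a for a, b in zip(quotients, quotients[1:]))
```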
5.2 Existence and Uniqueness Theorem
Figure 5: Rectangle R in the existence and uniqueness theorems
Theorem 5.2 (Existence). Let f(x, y) be continuous at all points (x, y) in some closed
rectangle

    R := {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b},  (a, b > 0).

Since f is continuous in a closed and bounded domain, it is necessarily bounded in R; that
is, there is a number K such that

    |f(x, y)| ≤ K for all (x, y) ∈ R.

Then the IVP (5.1) has at least one solution y = y(x). This solution exists at least for all
x in the subinterval |x − x0| ≤ β of the interval |x − x0| ≤ a, where

    β = min(a, b/K).

(Note that the solution exists possibly in a smaller interval.)
Theorem 5.3 (Uniqueness). Let f be a real function satisfying the following two condi-
tions:

1. f(x, y) is continuous at all points (x, y) in the closed rectangle

    R := {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b},  (a, b > 0),

and hence bounded in R; that is, there is a number K such that

    |f(x, y)| ≤ K for all (x, y) ∈ R.

2. f satisfies a Lipschitz condition (with respect to y) in R.

Then the initial value problem (5.1) has at most one solution y = y(x). Thus, by the
Existence Theorem, the problem has precisely one solution. This solution exists at least
for all x in the subinterval |x − x0| ≤ β, where

    β = min(a, b/K).
Corollary 5.4. Let f and its partial derivative ∂f/∂y be continuous for all (x, y) in the
closed rectangle

    R := {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b},  (a, b > 0),

and hence bounded, say

    |f(x, y)| ≤ K and |∂f(x, y)/∂y| ≤ M

for all (x, y) ∈ R. Then the initial value problem (5.1) has one and only one solution y(x).
This solution exists at least for all x in the subinterval |x − x0| ≤ β, where

    β = min(a, b/K).
Proof. Follows from Theorems 5.1 and 5.3.
Remark 5.3. The existence and uniqueness theorems stated above are local in nature,
since the interval |x − x0| ≤ β where the solution exists may be smaller than the original
interval |x − x0| ≤ a where f(x, y) is defined. However, in some cases this restriction can
be removed. For instance, we have the following theorem.

Theorem 5.5. Let f(x, y) be a continuous function that satisfies a Lipschitz condition

    |f(x, y1) − f(x, y2)| ≤ K|y1 − y2|

on a strip defined by a ≤ x ≤ b and −∞ < y < ∞. If (x0, y0) is any point of the strip,
then the initial value problem

    y' = f(x, y),  y(x0) = y0

has one and only one solution y = y(x) on the interval a ≤ x ≤ b.
Example 5.7. Consider the IVP

    dy/dx + p(x)y = r(x),  y(x0) = y0,    (5.9)

where p(x) and r(x) are defined and continuous in the interval a ≤ x ≤ b. Here f(x, y) =
−p(x)y + r(x). Let L = max{|p(x)| : a ≤ x ≤ b}. Then

    |f(x, y1) − f(x, y2)| = |−p(x)(y1 − y2)| ≤ L|y1 − y2|.

Thus f satisfies a Lipschitz condition w.r.t. y in the infinite vertical strip a ≤ x ≤ b,
−∞ < y < ∞. Therefore, the IVP (5.9) has a unique solution in the original interval
a ≤ x ≤ b.
Remark 5.4. Though the theorems are stated in terms of an interior point x0, the point
x0 could also be the left or right endpoint of the interval.
Example 5.8. Consider the initial-value problem

    dy/dx = x^2 + y^2,  y(1) = 3.

Here f(x, y) = x^2 + y^2, and ∂f/∂y = 2y. Both of the functions f and ∂f/∂y are
continuous and bounded in every rectangle

    R := {(x, y) : |x − 1| ≤ a, |y − 3| ≤ b},  (a, b > 0).

Thus all hypotheses of the Existence and Uniqueness Theorem are satisfied, and hence
there is a unique solution φ of the above IVP.
Example 5.9. Consider the IVP

    dy/dx = 1 + y^2,  y(0) = 0.

Consider the rectangle

    S = {(x, y) : |x| ≤ 100, |y| ≤ 1}.

Clearly f and ∂f/∂y are continuous in S. Hence, there exists a solution in a neighbourhood
of (0, 0). Now f(x, y) = 1 + y^2, and |f(x, y)| ≤ 2 for all (x, y) ∈ S. Therefore, β =
min(100, 1/2) = 1/2, and hence the theorems guarantee the existence of a unique solution
in |x| ≤ 1/2, which is much smaller than the original interval |x| ≤ 100.

Since the above equation is separable, we can solve it exactly and find y(x) = tan x. This
solution is valid only on −π/2 < x < π/2, which is also much smaller than |x| ≤ 100, but
nevertheless bigger than the interval predicted by the existence and uniqueness theorems.
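On the guaranteed interval |x| ≤ 1/2, a numerical integration agrees with the exact solution tan x (a Python sketch added for illustration, using a classical Runge–Kutta step):

```python
import math

def rk4(f, x0, y0, x1, n):
    """Integrate y' = f(x, y) from (x0, y0) to x1 with n classical RK4 steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

# y' = 1 + y^2, y(0) = 0 has exact solution y = tan x on (-pi/2, pi/2)
y_half = rk4(lambda x, y: 1 + y*y, 0.0, 0.0, 0.5, 1000)
assert abs(y_half - math.tan(0.5)) < 1e-8
```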
6 Basic Theory of Linear Homogeneous Differential Equations
Definition 6.1 (Linear ODE). A linear ordinary differential equation of order n, in the
dependent variable y and the independent variable x, is an equation that is in, or can be
expressed in, the form

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + a_(n−1)(x) dy/dx + a_n(x)y = F(x),    (6.1)

where a0 is not identically zero (that is, a0(x) ≠ 0 for some x).
Definition 6.2 (Linear Homogeneous ODE). Consider a linear ordinary differential equa-
tion

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + a_(n−1)(x) dy/dx + a_n(x)y = F(x).    (6.2)

If the function F(x) is identically zero (that is, F(x) = 0 for all x considered), then (6.2)
is called a linear homogeneous ODE. If F(x) is not identically zero, then (6.2) is called a
linear nonhomogeneous ODE.
Example 6.1. The equations

    d^2y/dx^2 + 3x dy/dx + x^3 y = 0  and  d^2y/dx^2 + 3x dy/dx + x^3 y = e^x

are, respectively, homogeneous and nonhomogeneous linear ODEs.
Theorem 6.1 (Existence and Uniqueness Theorem). Consider the nth-order linear dif-
ferential equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + a_(n−1)(x) dy/dx + a_n(x)y = F(x),    (6.3)

where a0, a1, . . . , an and F are continuous real functions on a real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Consider an IVP consisting of Equation (6.3) along
with the supplementary conditions

    y(x0) = c0,  y'(x0) = c1,  . . . ,  y^(n−1)(x0) = c_(n−1),  x0 ∈ [a, b].

Then there exists a unique solution f of this IVP, and this solution is defined over the
entire interval a ≤ x ≤ b.
Remark 6.1. For the second-order linear differential equation

    a0(x) d^2y/dx^2 + a1(x) dy/dx + a2(x)y = F(x),    (6.4)

Theorem 6.1 takes the following form:
Let a0, a1, a2 and F be continuous real functions on a real interval a ≤ x ≤ b, and let
a0(x) ≠ 0 for all x on this interval. Then the IVP

    a0(x) d^2y/dx^2 + a1(x) dy/dx + a2(x)y = F(x),
    y(x0) = c0,  y'(x0) = c1,  x0 ∈ [a, b],

has a unique solution f, and this solution is defined over the entire interval a ≤ x ≤ b.
Example 6.2. Consider the initial-value problem

    d^2y/dx^2 + 3x dy/dx + x^3 y = e^x,
    y(1) = 2,  y'(1) = −5.

Note that 1, 3x, x^3 and e^x are all continuous for all values of x, −∞ < x < ∞. The
point x0 here is the point 1, and the real numbers c0 and c1 are 2 and −5, respectively.
Thus Theorem 6.1 assures us that a solution of the given problem exists, is unique, and is
defined for all x, −∞ < x < ∞.
Problem 6.1. Let f be a solution of the nth-order homogeneous linear differential equa-
tion

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + a_(n−1)(x) dy/dx + a_n(x)y = 0,    (6.5)

such that

    f(x0) = 0,  f'(x0) = 0,  . . . ,  f^(n−1)(x0) = 0,

where x0 is a point of the interval a ≤ x ≤ b in which the coefficients a0, a1, . . . , an are all
continuous and a0(x) ≠ 0 for all x ∈ [a, b]. Then f(x) = 0 for all x on a ≤ x ≤ b.

Solution. Note that the function g defined by g(x) = 0 for all x ∈ [a, b] is a solution of the
given IVP. By the uniqueness assertion of Theorem 6.1, this IVP has only one solution,
so f = g; that is, f(x) = 0 for all x ∈ [a, b].
We now consider the fundamental results concerning the linear homogeneous equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + a_(n−1)(x) dy/dx + a_n(x)y = 0,    (6.6)

where a0 is not identically zero.
6.1 Linear Combination of Solutions
Definition 6.3 (Linear Combination). If f1, f2, . . . , fm are given functions and c1, c2, . . . , cm
are m constants, then the expression

    c1f1 + c2f2 + · · · + cmfm

is called a linear combination of f1, f2, . . . , fm.

Theorem 6.2 (Basic Theorem on Linear Homogeneous ODEs). Any linear combination
of solutions of the homogeneous linear differential equation (6.6) is also a solution of (6.6).

Proof. Trivial.
Remark 6.2. For the second-order linear homogeneous equation,
    a0(x) d^2y/dx^2 + a1(x) dy/dx + a2(x) y = 0,        (6.7)
Theorem 6.2 states that if f1 and f2 are two solutions of (6.7), then, c1f1 + c2f2 is also a
solution of (6.7), where c1 and c2 are any two constants.
Example 6.3. We can verify that sin x and cos x are solutions of
    d^2y/dx^2 + y = 0
and hence, by Theorem 6.2, 5 sin x + 6 cos x is also a solution of it.
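Theorem 6.2 is easy to sanity-check numerically. The sketch below is an added illustration (not part of the original notes): it evaluates the residual y'' + y for an arbitrary linear combination of sin x and cos x, using the exact derivatives (sin x)'' = −sin x and (cos x)'' = −cos x.

```python
import math

def residual(c1, c2, x):
    # y = c1 sin x + c2 cos x, so y'' = -c1 sin x - c2 cos x exactly
    y = c1 * math.sin(x) + c2 * math.cos(x)
    ypp = -c1 * math.sin(x) - c2 * math.cos(x)
    return ypp + y  # should vanish for every choice of c1, c2, x

# the combination 5 sin x + 6 cos x from Example 6.3
worst = max(abs(residual(5, 6, 0.1 * k)) for k in range(-50, 51))
print(worst)  # 0.0 here, since ypp is computed as the exact negative of y
```

Any other pair of constants works just as well, which is the content of the theorem.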
6.2 Linear Independence of Solutions
Definition 6.4 (Linear Dependence and Independence). Let f1, f2, . . . , fm be m functions
defined in an interval I.
1. The m functions f1, f2, . . . , fm are called linearly dependent on a ≤ x ≤ b if there
exist constants c1, c2, . . . , cm not all zero, such that
c1f1(x) + c2f2(x) + . . . + cmfm(x) = 0
for all x ∈ [a, b].
2. The m functions f1, f2, . . . , fm are called linearly independent on the interval a ≤
x ≤ b if they are not linearly dependent there. That is, the functions f1, f2, . . . , fm
are linearly independent on a ≤ x ≤ b if the relation
c1f1(x) + c2f2(x) + . . . + cmfm(x) = 0
for all x ∈ [a, b] implies that
c1 = c2 = · · · = cm = 0.
Example 6.4.
1. sin x, cos x are linearly independent on [−π, π].
2. x|x|, x2 are linearly independent on [−1, 1].
3. x|x|, x2 are linearly dependent on [0, 1].
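Items 2 and 3 can be checked directly. The following sketch (my own illustration, with hypothetical helper names) shows that on [0, 1] the combination 1·x|x| + (−1)·x² vanishes identically, while on [−1, 1] evaluating any vanishing combination at x = 1 and x = −1 forces both constants to zero.

```python
def f1(x): return x * abs(x)
def f2(x): return x * x

# Dependence on [0,1]: c1 = 1, c2 = -1 kills the combination at every sample point.
dependent_on_unit = all(f1(x) - f2(x) == 0 for x in [0.0, 0.3, 0.5, 0.75, 1.0])

# Independence on [-1,1]: c1 f1 + c2 f2 = 0 at x = 1 gives c1 + c2 = 0, and at
# x = -1 gives -c1 + c2 = 0; the coefficient determinant below is nonzero, so
# the only solution is c1 = c2 = 0.
det = f1(1) * f2(-1) - f2(1) * f1(-1)   # = 1*1 - 1*(-1) = 2

print(dependent_on_unit, det)  # True 2
```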
Definition 6.5 (Wronskian). Let f1, f2, . . . , fn be n real functions each of which has an
(n − 1)st derivative on a real interval a ≤ x ≤ b. The determinant
    W(f1, f2, . . . , fn) = | f1          f2          · · ·  fn          |
                            | f1'         f2'         · · ·  fn'         |
                            | . . .       . . .              . . .       |
                            | f1^(n−1)    f2^(n−1)    · · ·  fn^(n−1)    |
in which primes denote derivatives, is called the Wronskian of these n functions. We
observe that W(f1, f2, . . . , fn) is itself a real function defined on a ≤ x ≤ b. Its value at x
is denoted by W(f1, f2, . . . , fn)(x) or by W[f1(x), f2(x), . . . , fn(x)].
Remark 6.3. For two differentiable functions f and g defined on [a, b], Wronskian W(f, g)
is obtained as the function
    W(f, g)(x) = f(x) g'(x) − g(x) f'(x), x ∈ [a, b].
Theorem 6.3 (Wronskian and Linear Independence). Let f1, f2, . . . , fn be n real functions
each of which has an (n − 1)st derivative on a real interval a ≤ x ≤ b.
1. If f1, f2, . . . , fn are linearly dependent on [a, b], then
W(f1, f2, . . . , fn)(x) = 0 for all x ∈ [a, b]
2. Let f1, f2, . . . , fn be the solutions of the linear homogeneous equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (6.8)

where a0, a1, . . . , an are continuous real functions on the real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. If
W(f1, f2, . . . , fn)(x0) = 0
for some x0 ∈ [a, b], then f1, f2, . . . , fn are linearly dependent on [a, b].
3. The Wronskian of n solutions f1, f2, . . . , fn of (6.8) is either identically zero on
a ≤ x ≤ b or else is never zero on a ≤ x ≤ b.
Proof. (1): Let us choose an arbitrary d ∈ [a, b], and we prove W(f1, f2, . . . , fn)(d) = 0.
Since f1, f2, . . . , fn are linearly dependent on [a, b], there exist c1, c2, . . . , cn, not all zero,
such that
c1f1(x) + c2f2(x) + . . . + cnfn(x) = 0 for all x ∈ [a, b]. (6.9)
Differentiating (6.9) repeatedly, we obtain for all x ∈ [a, b]

    c1 f1(x) + c2 f2(x) + . . . + cn fn(x) = 0
    c1 f1'(x) + c2 f2'(x) + . . . + cn fn'(x) = 0
    · · · · · · · · ·
    c1 f1^(n−1)(x) + c2 f2^(n−1)(x) + . . . + cn fn^(n−1)(x) = 0

Thus, in particular for the point d ∈ [a, b], we obtain

    c1 f1(d) + c2 f2(d) + . . . + cn fn(d) = 0
    c1 f1'(d) + c2 f2'(d) + . . . + cn fn'(d) = 0
    · · · · · · · · ·
    c1 f1^(n−1)(d) + c2 f2^(n−1)(d) + . . . + cn fn^(n−1)(d) = 0        (6.10)
Note that (6.10) is a linear homogeneous system of n equations in n unknowns c1, c2, . . . , cn.
Since this system has a nontrivial solution (as not all of c1, c2, . . . , cn are zero), the
determinant of its coefficients must vanish:

    | f1(d)          f2(d)          · · ·  fn(d)          |
    | f1'(d)         f2'(d)         · · ·  fn'(d)         |
    | . . .          . . .                 . . .          |  = 0,
    | f1^(n−1)(d)    f2^(n−1)(d)    · · ·  fn^(n−1)(d)    |
and hence W(f1, f2, . . . , fn)(d) = 0.
(2): Consider the following linear homogeneous system of n equations in n unknowns
c1, c2, . . . , cn
    c1 f1(x0) + c2 f2(x0) + . . . + cn fn(x0) = 0
    c1 f1'(x0) + c2 f2'(x0) + . . . + cn fn'(x0) = 0
    · · · · · · · · ·
    c1 f1^(n−1)(x0) + c2 f2^(n−1)(x0) + . . . + cn fn^(n−1)(x0) = 0        (6.11)
Now the determinant of the system (6.11) is the Wronskian W(f1, f2, . . . , fn) at x0, and
hence this determinant is zero. Therefore, we can find a nontrivial solution of the system
(6.11), that is, there exist c1, c2, . . . , cn, not all zero, satisfying the equations of the system
(6.11). Take these c1, c2, . . . , cn and form
y(x) = c1f1(x) + c2f2(x) + · · · + cnfn(x). (6.12)
Now using the fact that (6.12) is a solution of (6.8), and the c1, c2, . . . , cn satisfy the
equations of the system (6.11), we obtain (6.12) as a solution of the IVP
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,
    y(x0) = 0, y'(x0) = 0, . . . , y^(n−1)(x0) = 0.        (6.13)
But f(x) ≡ 0 is a solution of the IVP (6.13), and hence using the Uniqueness Theorem
6.1, we obtain y(x) ≡ 0, that is,
c1f1(x) + c2f2(x) + · · · + cnfn(x) = 0 for all x ∈ [a, b].
Since c1, c2, . . . , cn are not all zero, we obtain f1, f2, . . . , fn as linearly dependent on [a, b].
(3): Follows from (1) and (2): if the Wronskian vanishes at even one point of [a, b], then by
(2) the solutions are linearly dependent, and then by (1) the Wronskian vanishes identically
on [a, b].
Remark 6.4. The conclusion of Item (2) of Theorem 6.3 may not hold if f1, f2, . . . , fn are not
solutions of (6.8). That is, we can have linearly independent f1, f2, . . . , fn on [a, b] with
W(f1, f2, . . . , fn)(x0) = 0 for some x0 ∈ [a, b]. For instance, the functions f(x) = x and
g(x) = sin x are linearly independent on [−2π, 2π] (use Item (1) of Theorem 6.3 at x = π
to prove it), but W(f, g)(0) = 0.
As a direct consequence of Theorem 6.3, we obtain
Corollary 6.4. Let f1, f2, . . . , fn be the solutions of the linear homogeneous equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,

where a0, a1, . . . , an are continuous real functions on the real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Then the following are equivalent.
1. f1, f2, . . . , fn are linearly dependent on [a, b].
2. W(f1, f2, . . . , fn)(x) = 0 for all x ∈ [a, b].
3. W(f1, f2, . . . , fn)(x) = 0 for some x ∈ [a, b].
Example 6.5. The solutions sin x and cos x of

    d^2y/dx^2 + y = 0

are linearly independent on every real interval as

    W(sin x, cos x) = | sin x    cos x  | = − sin^2 x − cos^2 x = −1 ≠ 0
                      | cos x   − sin x |

for all real x.
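As a quick numeric companion to Example 6.5 (an added illustration, not from the notes), the 2×2 Wronskian formula of Remark 6.3 returns the constant −1 at every sample point:

```python
import math

def wronskian_sin_cos(x):
    # W(sin, cos)(x) = sin x * (cos x)' - cos x * (sin x)' = -sin^2 x - cos^2 x
    return math.sin(x) * (-math.sin(x)) - math.cos(x) * math.cos(x)

print([round(wronskian_sin_cos(x), 12) for x in (0.0, 1.0, 2.5)])  # [-1.0, -1.0, -1.0]
```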
Problem 6.2. Consider the linear homogeneous equation

    x^3 d^3y/dx^3 − 4x^2 d^2y/dx^2 + 8x dy/dx − 8y = 0.
1. Verify that x, x^2 and x^4 are all solutions of this equation.
2. Show that W(x, x^2, x^4) = 0 at x = 0, but W(x, x^2, x^4) ≠ 0 at x = 1.
3. Show that x, x^2 and x^4 are linearly independent on [0, 1].
4. Explain why Items 1-3 do not contradict Corollary 6.4 on the interval [0, 1].
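The computations behind items 1 and 2 can be sanity-checked with a short script. The sketch below is my own illustration (helper names are hypothetical): it evaluates the residual x³y''' − 4x²y'' + 8xy' − 8y for each claimed solution using hand-computed derivatives, and expands the 3×3 Wronskian by cofactors.

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def wronskian(x):
    # rows: (f1, f2, f3), (f1', f2', f3'), (f1'', f2'', f3'') for f = x, x^2, x^4
    return det3([[x, x**2, x**4],
                 [1, 2 * x, 4 * x**3],
                 [0, 2, 12 * x**2]])

def residual(y, yp, ypp, yppp, x):
    # left-hand side of x^3 y''' - 4x^2 y'' + 8x y' - 8y = 0
    return x**3 * yppp - 4 * x**2 * ypp + 8 * x * yp - 8 * y

x = 1.5
checks = [residual(x, 1, 0, 0, x),                         # y = x
          residual(x**2, 2 * x, 2, 0, x),                  # y = x^2
          residual(x**4, 4 * x**3, 12 * x**2, 24 * x, x)]  # y = x^4
print(checks, wronskian(0), wronskian(1))  # [0.0, 0.0, 0.0] 0 6
```

Item 4 then comes down to the leading coefficient a0(x) = x³ vanishing at x = 0, so Corollary 6.4 does not apply on [0, 1].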
6.3 Fundamental Set/Basis of Solutions and General Solution
Theorem 6.5 (Existence of Linearly Independent Solutions). Consider the nth-order
homogeneous linear differential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (6.14)

where a0, a1, . . . , an are continuous real functions on a real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Then (6.14) always possesses n solutions that are linearly
independent on [a, b].
Proof. Let x0 ∈ [a, b]. Consider the IVP consisting of the equation (6.14) and the
supplementary conditions y(x0) = 1, y'(x0) = 0, . . . , y^(n−1)(x0) = 0. Then by the existence and
uniqueness theorem 6.1, this IVP has a unique solution f0 defined on [a, b]. Similarly, for
each integer i, 0 ≤ i ≤ n−1, the IVP consisting of the equation (6.14) and the supplementary
conditions y(x0) = 0, y'(x0) = 0, . . . , y^(i)(x0) = 1, . . . , y^(n−1)(x0) = 0 has a unique
solution fi defined on [a, b]. Now it remains to show that f0, f1, . . . , fn−1 are linearly independent
on [a, b]. But this follows from Theorem 6.3 and the fact that

    W(f0, f1, . . . , fn−1)(x0) = | f0(x0)          f1(x0)          · · ·  fn−1(x0)          |
                                  | f0'(x0)         f1'(x0)         · · ·  fn−1'(x0)         |
                                  | . . .           . . .                  . . .             |
                                  | f0^(n−1)(x0)    f1^(n−1)(x0)    · · ·  fn−1^(n−1)(x0)    |

                                = | 1  0  · · ·  0 |
                                  | 0  1  · · ·  0 |
                                  | . . .          |
                                  | 0  0  · · ·  1 |  = 1 ≠ 0.
Theorem 6.6. Let f1, f2, . . . , fn be n linearly independent solutions, defined on [a, b], of
the nth-order homogeneous linear differential equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (6.15)

where a0, a1, . . . , an are continuous real functions on a real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Then every solution f of (6.15) defined on [a, b] can be
expressed as a linear combination
as a linear combination
f = c1f1 + c2f2 + · · · + cnfn
by proper choice of the constants c1, c2, . . . , cn.
Proof. Let f be any solution of (6.15) defined on [a, b], and let x0 ∈ [a, b]. Consider the
following linear system of n equations in n unknowns c1, c2, . . . , cn

    c1 f1(x0) + c2 f2(x0) + . . . + cn fn(x0) = f(x0)
    c1 f1'(x0) + c2 f2'(x0) + . . . + cn fn'(x0) = f'(x0)
    · · · · · · · · ·
    c1 f1^(n−1)(x0) + c2 f2^(n−1)(x0) + . . . + cn fn^(n−1)(x0) = f^(n−1)(x0)        (6.16)

Since f1, f2, . . . , fn are linearly independent solutions, Corollary 6.4 gives

    W(f1, f2, . . . , fn)(x0) = | f1(x0)          f2(x0)          · · ·  fn(x0)          |
                                | f1'(x0)         f2'(x0)         · · ·  fn'(x0)         |
                                | . . .           . . .                  . . .           |
                                | f1^(n−1)(x0)    f2^(n−1)(x0)    · · ·  fn^(n−1)(x0)    |  ≠ 0.
Hence the system (6.16) has a unique solution, say, c1 = d1, c2 = d2, · · · , cn = dn. Let
ξ(x) = d1f1(x) + d2f2(x) + . . . + dnfn(x), x ∈ [a, b].
Now consider the IVP
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,
    y(x0) = f(x0), y'(x0) = f'(x0), . . . , y^(n−1)(x0) = f^(n−1)(x0)        (6.17)
Note that both f and ξ are solutions of this IVP, and are defined on [a, b]. Therefore, by
Theorem 6.1, we must have ξ(x) = f(x) for all x ∈ [a, b]. That is,
f(x) = d1f1(x) + d2f2(x) + . . . + dnfn(x), x ∈ [a, b].
Definition 6.6 (Fundamental Set of Solution and General Solution). If f1, f2, . . . , fn are
n linearly independent solutions of the nth-order homogeneous linear differential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (6.18)
on a ≤ x ≤ b, then the set {f1, f2, . . . , fn} is called a fundamental set/basis of solutions
of (6.18) and the function f defined by
f(x) = c1f1(x) + c2f2(x) + . . . + cnfn(x), a ≤ x ≤ b,
where c1, c2, . . . , cn are arbitrary constants, is called a general solution of (6.18) on a ≤
x ≤ b.
Remark 6.5. Let f(x) = c1f1(x) + c2f2(x) + . . . + cnfn(x) be a general solution of the
nth-order homogeneous linear differential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (6.19)

where a0, a1, . . . , an are continuous real functions on the real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Then, from Theorem 6.6, it follows that every solution of
(6.19) is obtained from the general solution f by choosing suitable values for the constants
c1, c2, . . . , cn.
Theorem 6.7 (Existence of General Solution). The nth-order homogeneous linear differ-
ential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,

where a0, a1, . . . , an are continuous real functions on the real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b, has a fundamental set/basis of solutions, and hence a
general solution.
Example 6.6. One can show that sin x and cos x are two linearly independent solutions
of
    d^2y/dx^2 + y = 0        (6.20)

for all x, −∞ < x < ∞. Therefore, the general solution of (6.20) is given by

    y(x) = c1 sin x + c2 cos x.        (6.21)
Every solution of (6.20) is obtained from (6.21) by suitable choice of the constants c1, c2.
7 Homogeneous Linear Equation with Constant Coefficients
7.1 Homogeneous 2nd Order Linear Equation with Constant Coefficients
Consider differential equations of the form

    a d^2y/dx^2 + b dy/dx + c y = 0        (7.1)

where a, b, c are real constants, and a ≠ 0. Such an equation is called a homogeneous 2nd
order linear equation with constant coefficients.
Let us assume that equation (7.1) has a solution of the form y = e^{mx}, where m is a
constant to be determined. If we substitute this solution in (7.1), we obtain

    a m^2 + b m + c = 0        (since e^{mx} ≠ 0).        (7.2)

Equation (7.2) is called the characteristic equation for (7.1). Two linearly independent
solutions (i.e., a basis) of such an equation depend on the roots of the quadratic equation (7.2).
Theorem 7.1. Consider the 2nd order homogeneous linear differential equation (7.1) with
constant coefficients. Let m1, m2 be the roots of the characteristic equation (7.2).
• Real and distinct roots: If m1 and m2 are real and distinct, then two linearly
  independent solutions of (7.1) are e^{m1 x} and e^{m2 x}. Thus the general solution of (7.1)
  is

      y = C1 e^{m1 x} + C2 e^{m2 x}.

• Real and equal roots: If m1 and m2 are real and m1 = m2 = m, then two linearly
  independent solutions of (7.1) are e^{mx} and x e^{mx}. Thus the general solution of (7.1)
  is

      y = e^{mx} (C1 + C2 x).

• Complex roots: If m1 and m2 are complex conjugates, say m1 = α + iβ and
  m2 = α − iβ, then the general solution of (7.1) is

      y = e^{αx} (C1 cos βx + C2 sin βx).
Problem 7.1. Solve

    d^2y/dx^2 − dy/dx = 0.
Solution. The characteristic equation is m^2 − m = 0, and it has roots m = 0, 1. The
general solution is y = C1 + C2 e^x.
Problem 7.2. Solve

    d^2y/dx^2 − 2 dy/dx + y = 0.
Solution. The characteristic equation is m^2 − 2m + 1 = 0, and it has roots m = 1, 1. The
general solution is y = e^x (C1 + C2 x).
Problem 7.3. Solve

    d^2y/dx^2 − 2 dy/dx + 5y = 0.
Solution. The characteristic equation is m^2 − 2m + 5 = 0, and it has roots m = 1 ± 2i.
The general solution is y = e^x (C1 cos 2x + C2 sin 2x).
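The case analysis of Theorem 7.1 starts from the roots of am² + bm + c = 0. The small helper below is an added sketch (not from the notes) using the quadratic formula; `cmath.sqrt` handles a negative discriminant, so it reproduces the roots used in Problems 7.1 and 7.3:

```python
import cmath

def char_roots(a, b, c):
    # quadratic formula for a m^2 + b m + c = 0; works for complex roots too
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(char_roots(1, -1, 0))   # m^2 - m = 0      -> roots 1 and 0:  y = C1 + C2 e^x
print(char_roots(1, -2, 5))   # m^2 - 2m + 5 = 0 -> roots 1 +- 2i:  y = e^x(C1 cos 2x + C2 sin 2x)
```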
7.2 Homogeneous nth Order Linear Equation with Constant Coefficients
An nth-order homogeneous linear equation with constant coefficients is of the form

    a0 d^n y/dx^n + a1 d^(n−1)y/dx^(n−1) + · · · + an−1 dy/dx + an y = 0,        (7.3)

where all the ai's are real constants and a0 ≠ 0. As in the case of the 2nd order linear
equation, the linearly independent solutions of (7.3) depend on the characteristic equation

    a0 m^n + a1 m^(n−1) + · · · + an−1 m + an = 0.        (7.4)
This equation has n roots. As in the case of the 2nd order equation, the following can be
proved.
Theorem 7.2. The fundamental set of solutions B for (7.3) is obtained using the following
two rules:
Rule 1: If a root m of (7.4) is real and repeated k times, then this root contributes the k
linearly independent solutions e^{mx}, x e^{mx}, x^2 e^{mx}, . . . , x^{k−1} e^{mx} to B.
Rule 2: If the roots m = α ± iβ of (7.4) are complex conjugates (β ≠ 0) and are repeated
k times each, then they contribute the 2k linearly independent solutions e^{αx} cos βx,
e^{αx} sin βx, x e^{αx} cos βx, x e^{αx} sin βx, x^2 e^{αx} cos βx, x^2 e^{αx} sin βx, . . . ,
x^{k−1} e^{αx} cos βx, and x^{k−1} e^{αx} sin βx to B.
Problem 7.4. Solve

    d^6y/dx^6 + 8 d^5y/dx^5 + 25 d^4y/dx^4 + 32 d^3y/dx^3 − d^2y/dx^2 − 40 dy/dx − 25y = 0.
Solution. The characteristic equation is

    m^6 + 8m^5 + 25m^4 + 32m^3 − m^2 − 40m − 25 = 0

and it has roots m = 1, −1, −2 ± i, −2 ± i.
Solutions corresponding to the roots 1 and −1 are e^x and e^{−x} respectively, and solutions
corresponding to −2 ± i are e^{−2x} cos x, e^{−2x} sin x, x e^{−2x} cos x, x e^{−2x} sin x. Therefore, six
linearly independent solutions are e^x, e^{−x}, e^{−2x} cos x, e^{−2x} sin x, x e^{−2x} cos x, x e^{−2x} sin x,
and hence the general solution is

    y = C1 e^x + C2 e^{−x} + e^{−2x} [(C3 + C4 x) cos x + (C5 + C6 x) sin x].
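The root list in Problem 7.4 can be double-checked by multiplying the claimed factors back together: the factorization (m² − 1)(m² + 4m + 5)² should reproduce the characteristic polynomial. This is my own verification sketch, with coefficient lists written highest degree first:

```python
def poly_mul(p, q):
    # convolution of coefficient lists (highest degree first)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

quad_sq = poly_mul([1, 4, 5], [1, 4, 5])   # (m^2 + 4m + 5)^2: roots -2 ± i, each double
full = poly_mul([1, 0, -1], quad_sq)       # times (m - 1)(m + 1) = m^2 - 1
print(full)  # [1, 8, 25, 32, -1, -40, -25]
```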
Problem 7.5. Solve

    d^4y/dx^4 − 4 d^3y/dx^3 + 14 d^2y/dx^2 − 20 dy/dx + 25y = 0.
Solution. The characteristic equation is

    m^4 − 4m^3 + 14m^2 − 20m + 25 = 0

and it has roots m = 1 + 2i, 1 − 2i, 1 + 2i, 1 − 2i.
Therefore, four linearly independent solutions are

    e^x cos 2x, e^x sin 2x, x e^x cos 2x, x e^x sin 2x.

The general solution is

    y = e^x [(C1 + C2 x) sin 2x + (C3 + C4 x) cos 2x].
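For Problem 7.5 one can confirm that 1 ± 2i really are double roots by evaluating both the polynomial and its derivative there (a double root annihilates both). A quick Horner-scheme check, added here as an illustration:

```python
def horner(coeffs, x):
    # evaluate a polynomial (coefficients highest degree first) at x
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

p = [1, -4, 14, -20, 25]    # m^4 - 4m^3 + 14m^2 - 20m + 25
dp = [4, -12, 28, -20]      # its derivative
print(abs(horner(p, 1 + 2j)), abs(horner(dp, 1 + 2j)))  # both ~0
```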
8 Basic Theory of Linear Nonhomogeneous Differential Equations
We first recall the following existence and uniqueness theorem.
Theorem 8.1 (Existence and Uniqueness Theorem). Consider the nth-order linear dif-
ferential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F(x),        (8.1)

where a0, a1, . . . , an and F are continuous real functions on a real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Consider an IVP consisting of the Equation (8.1) along
with the supplementary conditions

    y(x0) = c0, y'(x0) = c1, . . . , y^(n−1)(x0) = cn−1, x0 ∈ [a, b].
Then there exists a unique solution f of this IVP and this solution is defined over the
entire interval a ≤ x ≤ b.
Theorem 8.2 (Relations of Solutions of a Linear Nonhomogeneous ODE and its Homogeneous
Counterpart). Consider the nonhomogeneous linear ODE

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F(x),        (8.2)

and the homogeneous part of (8.2)

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0.        (8.3)
Let I be an interval.
1. The sum of a solution y of (8.2) on I and a solution ỹ of (8.3) on I is a solution of
(8.2) on I.
2. The difference of two solutions of (8.2) on I is a solution of (8.3) on I.
Theorem 8.3. Consider the nth-order nonhomogeneous linear differential equation
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F(x),        (8.4)

and the corresponding homogeneous equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0,        (8.5)

where a0, a1, . . . , an and F are continuous real functions on a real interval a ≤ x ≤ b and
a0(x) ≠ 0 for all x on a ≤ x ≤ b. Let yp, defined on [a, b], be a given solution of (8.4)
involving no arbitrary constants, and let

    yc = c1 y1 + . . . + cn yn
be the general solution of the homogeneous equation (8.5) defined on [a, b]. Then every
solution φ of the nonhomogeneous equation (8.4) can be expressed in the form
yc + yp,
for suitable choice of the n arbitrary constants c1, c2, . . . , cn.
Proof. Let φ be a solution of (8.4). Then, by Theorem 8.2, φ − yp is a solution of (8.5). Since
yc = c1 y1 + . . . + cn yn is the general solution of (8.5), Remark 6.5 guarantees suitable
choices of the n arbitrary constants c1, c2, . . . , cn such that φ − yp = c1 y1 + . . . + cn yn.
Thus
φ = yp + c1y1 + . . . + cnyn,
for suitable choice of the arbitrary constants c1, c2, . . . , cn.
Definition 8.1 (General Solution). Consider the nth-order (nonhomogeneous) linear
differential equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F(x),        (8.6)

and the corresponding homogeneous equation

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = 0.        (8.7)
1. The general solution of (8.7) is called the complementary function of Equation
(8.6). We shall denote this by yc.
2. Any particular solution of (8.6) involving no arbitrary constants is called a partic-
ular integral of (8.6). We shall denote this by yp.
3. The solution yc + yp of (8.6), where yc is the complementary function and yp is a
particular integral of (8.6), is called the general solution of (8.6).
Example 8.1. Consider the differential equation
    d^2y/dx^2 + y = x.
The complementary function is the general solution
yc = c1 sin x + c2 cos x
of the corresponding homogeneous equation
    d^2y/dx^2 + y = 0.
A particular integral is given by
yp = x.
Thus the general solution of the given equation may be written
y = yc + yp = c1 sin x + c2 cos x + x.
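The decomposition in Example 8.1 can be verified termwise: for any constants c1, c2, the function y = c1 sin x + c2 cos x + x has y'' = −c1 sin x − c2 cos x, so y'' + y = x. A small added check (my own sketch, not from the notes):

```python
import math

def residual(c1, c2, x):
    # y = c1 sin x + c2 cos x + x; the second derivative drops the linear term
    y = c1 * math.sin(x) + c2 * math.cos(x) + x
    ypp = -c1 * math.sin(x) - c2 * math.cos(x)
    return ypp + y - x  # y'' + y - x, which should be ~0 for every c1, c2

print(max(abs(residual(3.0, -7.0, 0.1 * k)) for k in range(-30, 31)))
```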
9 Solution Methods for Nonhomogeneous Linear ODEs
We point out that if the nonhomogeneous member F(x) of the linear differential equation
(8.6) is expressed as a linear combination of two or more functions, then the following
theorem may often be used to advantage in finding a particular integral.
Theorem 9.1. Let f1 be a particular integral of
    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F1(x),

and f2 be a particular integral of

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = F2(x).

Then k1 f1 + k2 f2 is a particular integral of

    a0(x) d^n y/dx^n + a1(x) d^(n−1)y/dx^(n−1) + · · · + an−1(x) dy/dx + an(x) y = k1 F1(x) + k2 F2(x),
where k1 and k2 are constants.
9.1 The Method of Undetermined Coefficients
Definition 9.1. We shall call a function a UC function if it is either (1) a function defined
by one of the following:
(i) x^n, where n is a positive integer or zero,
(ii) e^{ax}, where a is a nonzero constant,
(iii) sin bx, where b is a nonzero constant,
(iv) cos bx, where b is a nonzero constant,
or (2) a function defined as a finite product of two or more functions of these four types.
Example 9.1. A few examples of UC functions are x e^x, e^{2x} e^x, e^{5x} x^2 sin 2x,
sin x cos x, and sin^2 3x.
Definition 9.2. Consider a UC function f. The set of functions consisting of f itself and
all linearly independent UC functions of which the successive derivatives of f are either
constant multiples or linear combinations will be called the UC set of f.
Remark 9.1. Suppose h is a UC function defined as the product fg of two basic UC
functions f and g. Then the UC set of the product function h is the set of all the products
obtained by multiplying the various members of the UC set of f by the various members
of the UC set of g.
Table 1 lists some of the UC functions and their UC sets.
   UC function                        UC set
1  x^n                                {x^n, x^{n−1}, x^{n−2}, . . . , x, 1}
2  e^{ax}                             {e^{ax}}
3  sin bx or cos bx                   {sin bx, cos bx}
4  x^n e^{ax}                         {x^n e^{ax}, x^{n−1} e^{ax}, x^{n−2} e^{ax}, . . . , x e^{ax}, e^{ax}}
5  x^n sin bx or x^n cos bx           {x^n sin bx, x^{n−1} sin bx, x^{n−2} sin bx, . . . , x sin bx, sin bx,
                                       x^n cos bx, x^{n−1} cos bx, x^{n−2} cos bx, . . . , x cos bx, cos bx}
6  e^{ax} sin bx or e^{ax} cos bx     {e^{ax} sin bx, e^{ax} cos bx}
7  x^n e^{ax} sin bx or x^n e^{ax} cos bx
                                      {x^n e^{ax} sin bx, x^{n−1} e^{ax} sin bx, . . . , x e^{ax} sin bx, e^{ax} sin bx,
                                       x^n e^{ax} cos bx, x^{n−1} e^{ax} cos bx, . . . , x e^{ax} cos bx, e^{ax} cos bx}

Table 1: Some UC functions and corresponding UC sets
Remark 9.2. The method of undetermined coefficients works for the following nonhomogeneous
linear equation:

    a0 d^n y/dx^n + a1 d^(n−1)y/dx^(n−1) + · · · + an−1 dy/dx + an y = F(x)        (9.1)

where all the ai's are real constants and F(x) is a finite linear combination of UC functions.
In fact, due to Theorem 9.1, we need to consider only the case when F(x) is a UC function.
Method 1
To determine a particular integral yp of the nonhomogeneous linear equation (9.1), where
F(x) is a UC function, we proceed as follows:
Step 1: Obtain the UC set S of the UC function F(x).
Step 2: Obtain the set N as follows:
• If none of the functions in S is a solution of the homogeneous equation
    a0 d^n y/dx^n + a1 d^(n−1)y/dx^(n−1) + · · · + an−1 dy/dx + an y = 0,        (9.2)

then we take N := S.
• If S includes one or more functions which are solutions of the corresponding
  homogeneous differential equation (9.2), then we multiply each member of S
  by the lowest positive integral power of x, say x^k, so that the resulting revised
  set contains no members that are solutions of (9.2). We take N to be this
  revised set, so obtained.
After Step 2, suppose N is obtained as the set {f1, f2, . . . , fm}.
Step 3: Form a linear combination α1f1 + α2f2 + · · · + αmfm of elements of N with
unknown constant coefficients αi.
Step 4: Determine the unknown coefficients αi by substituting the linear combination
y = α1f1 + α2f2 + · · · + αmfm formed in Step 3 into the differential equation (9.1)
and demanding that it identically satisfy the differential equation.
Step 5: Substitute the values of αi’s determined in Step 4 in α1f1 + α2f2 + · · · + αmfm
to obtain a particular integral of (9.1).
Remark 9.3. Note that if y is a solution of the homogeneous part of the equation (9.1),
then it cannot be a particular integral of (9.1) (assuming F(x) is not identically zero).
This is why we need to have Step 2 in the above procedure.
Problem 9.1. Solve

    d^2y/dx^2 − 2 dy/dx + y = sin x.
Solution.
Step 1: Computing derivatives of f(x) = sin x, we find

    f'(x) = cos x, f''(x) = − sin x, . . .

The set of functions consisting of f itself and all linearly independent UC functions
of which the successive derivatives of f are either constant multiples or linear
combinations is given by {sin x, cos x}. Thus the UC set of the UC function sin x is
obtained as S = {cos x, sin x}.

Step 2: Note that the functions in S, that is, sin x and cos x, are not solutions of the
homogeneous equation d^2y/dx^2 − 2 dy/dx + y = 0, and hence we take N = S = {sin x, cos x}.
Step 3: Consider the linear combination
yp = α1 sin x + α2 cos x. (9.3)
Step 4: Assuming yp = α1 sin x + α2 cos x to be a particular integral of the given equation,
and substituting it in the equation, we obtain for all x

    −α1 sin x − α2 cos x − 2α1 cos x + 2α2 sin x + α1 sin x + α2 cos x = sin x
    ⇒ (2α2 − 1) sin x − 2α1 cos x = 0
    ⇒ α1 = 0, and α2 = 1/2.
Step 5: Thus a particular integral of the given equation is obtained as

    yp = (1/2) cos x.

Since the general solution of the homogeneous equation d^2y/dx^2 − 2 dy/dx + y = 0 is

    yc = (C1 + C2 x) e^x,

the general solution of the given equation is obtained as

    y = yc + yp = (C1 + C2 x) e^x + (1/2) cos x.
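Problem 9.1's particular integral is quick to verify: for yp = (1/2) cos x we have yp' = −(1/2) sin x and yp'' = −(1/2) cos x, so yp'' − 2yp' + yp = sin x. A verification sketch (added here, not from the notes):

```python
import math

def residual(x):
    # y'' - 2y' + y - sin x for y_p = (1/2) cos x
    y, yp, ypp = 0.5 * math.cos(x), -0.5 * math.sin(x), -0.5 * math.cos(x)
    return ypp - 2 * yp + y - math.sin(x)

print(max(abs(residual(0.1 * k)) for k in range(-30, 31)))
```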
Remark 9.4. Consider the equation d^2y/dx^2 − 2 dy/dx + y = 6x e^x. We cannot use the above
method directly, as 6x e^x is not a UC function. But we note that x e^x is a UC function,
and hence 6x e^x is a constant multiple of a UC function. Therefore, we can use Theorem
9.1 to find a particular integral of it. In fact, we have the following method when the
non-homogeneous part F(x) is a finite linear combination of UC functions.
Method 2
To determine a particular integral yp of the nonhomogeneous linear equation (9.1), where
F(x) is a finite linear combination of UC functions, we proceed as follows:
1. As F(x) is a finite linear combination of UC functions, F must be of the form

    F(x) = c1 F1(x) + c2 F2(x) + · · · + cm Fm(x),

where the Fi's are UC functions.
2. Use the method presented above to find a particular integral of each equation

    a0 d^n y/dx^n + a1 d^(n−1)y/dx^(n−1) + · · · + an−1 dy/dx + an y = Fi(x).

Let the particular integrals be fi, for 1 ≤ i ≤ m.
3. Then c1 f1 + c2 f2 + · · · + cm fm is a particular integral of

    a0 d^n y/dx^n + a1 d^(n−1)y/dx^(n−1) + · · · + an−1 dy/dx + an y = F(x).
Problem 9.2. Solve

    d^2y/dx^2 − 2 dy/dx + y = 6x e^x.

Solution. Note that 6x e^x is not a UC function, but x e^x is. So, we first find a particular
integral f of d^2y/dx^2 − 2 dy/dx + y = x e^x. Then, by Theorem 9.1, 6f will be a particular
integral of d^2y/dx^2 − 2 dy/dx + y = 6x e^x.
Step 1: Computing derivatives of f(x) = x e^x, we find

    f'(x) = x e^x + e^x, f''(x) = x e^x + 2e^x, . . .

The set of functions consisting of f itself and all linearly independent UC functions
of which the successive derivatives of f are either constant multiples or linear
combinations is given by {x e^x, e^x}. Thus the UC set of the UC function x e^x is
obtained as S = {x e^x, e^x}.

Step 2: Note that e^x and x e^x are solutions of the homogeneous equation

    d^2y/dx^2 − 2 dy/dx + y = 0.        (9.4)

Moreover, x^2 e^x and x^2 (x e^x) = x^3 e^x are not solutions of (9.4). Hence we take
N = {x^2 e^x, x^3 e^x}.
Step 3: Consider the linear combination

    yp = α1 x^2 e^x + α2 x^3 e^x.

Step 4: Assuming yp = α1 x^2 e^x + α2 x^3 e^x to be a particular integral of the equation
d^2y/dx^2 − 2 dy/dx + y = x e^x, and substituting it in the equation, we obtain for all x

    (6α2 x + 2α1) e^x = x e^x
    ⇒ (6α2 − 1)x + 2α1 = 0
    ⇒ α1 = 0, and α2 = 1/6.
Step 5: Thus a particular integral of the equation d^2y/dx^2 − 2 dy/dx + y = x e^x is

    yp = (1/6) x^3 e^x.

Hence a particular integral of the given equation d^2y/dx^2 − 2 dy/dx + y = 6x e^x is

    yp = x^3 e^x.

Since the general solution of the homogeneous equation d^2y/dx^2 − 2 dy/dx + y = 0 is

    yc = (C1 + C2 x) e^x,

the general solution of the given equation is obtained as

    y = yc + yp = (C1 + C2 x) e^x + x^3 e^x.
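Problem 9.2's answer can likewise be checked with the product rule: (x³e^x)' = (x³ + 3x²)e^x and (x³e^x)'' = (x³ + 6x² + 6x)e^x, so y'' − 2y' + y = 6x e^x. An added sketch encoding those hand-computed derivatives:

```python
import math

def residual(x):
    # y'' - 2y' + y - 6x e^x for y_p = x^3 e^x, derivatives via the product rule
    e = math.exp(x)
    y = x**3 * e
    yp = (x**3 + 3 * x**2) * e
    ypp = (x**3 + 6 * x**2 + 6 * x) * e
    return ypp - 2 * yp + y - 6 * x * e

print(max(abs(residual(0.1 * k)) for k in range(-20, 21)))
```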
Problem 9.3. Find a particular integral of

    d^2y/dx^2 − 2 dy/dx + y = 12x e^x + 5 sin x.

Solution. Let y1 and y2 be particular integrals of the equations d^2y/dx^2 − 2 dy/dx + y = 6x e^x
and d^2y/dx^2 − 2 dy/dx + y = sin x, respectively. Then, by Theorem 9.1, a particular integral of
d^2y/dx^2 − 2 dy/dx + y = 12x e^x + 5 sin x is obtained as

    yp = 2y1 + 5y2.

From Problems 9.1 and 9.2, we have y1 = x^3 e^x and y2 = (1/2) cos x, and hence we obtain

    yp = 2x^3 e^x + (5/2) cos x.
Problem 9.4. Find a particular integral of

    d^2y/dx^2 + y = |x|, x ∈ (−1, 1).

Solution. We solve the above equation first for 0 ≤ x < 1, and then for −1 < x ≤ 0. For
0 ≤ x < 1, the above equation reduces to

    d^2y/dx^2 + y = x.        (9.5)

The UC set for the UC function x is {1, x}. Note that 1 and x are not solutions of the
homogeneous part of the equation (9.5); hence we consider yp = α1 + α2 x as a particular
integral of (9.5). Substituting yp and its derivatives in the equation (9.5), we obtain for
all x ∈ [0, 1)

    α1 + α2 x = x
    ⇒ α1 = 0 and α2 = 1.

Thus a particular integral of (9.5) is

    yp = x for x ∈ [0, 1).        (9.6)

For −1 < x ≤ 0, the above equation reduces to

    d^2y/dx^2 + y = −x.

Proceeding similarly as above, we obtain a particular integral in this case as

    yp = −x for x ∈ (−1, 0].        (9.7)

Combining (9.6) and (9.7), we obtain a particular integral in the interval (−1, 1) as

    yp = |x|.
9.2 The Method of Variation of Parameters
We concentrate on the 2nd order equation, but the method applies to higher order ODEs
as well. It has much wider applicability than the method of undetermined coefficients.
First, the ODE need not have constant coefficients. Second, the nonhomogeneous part
F(x) can be a much more general function.
Theorem 9.2. A particular solution yp of the linear ODE

    d^2y/dx^2 + P(x) dy/dx + Q(x) y = F(x)        (9.8)

is given by

    yp = u(x) y1(x) + v(x) y2(x)        (9.9)

where
1. y1 and y2 are two linearly independent solutions of the homogeneous counterpart

    d^2y/dx^2 + P(x) dy/dx + Q(x) y = 0,

2. u(x) = − ∫ [y2(x) F(x) / W(y1, y2)(x)] dx,
3. v(x) = ∫ [y1(x) F(x) / W(y1, y2)(x)] dx.
Proof. We assume that yp given by

    yp(x) = u(x) y1(x) + v(x) y2(x),        (9.10)

where u(x) and v(x) are unknown functions, is a particular integral of (9.8).
Differentiating (9.10), we obtain

    yp'(x) = u'(x) y1(x) + v'(x) y2(x) + u(x) y1'(x) + v(x) y2'(x).

At this point, we impose the condition that

    u'(x) y1(x) + v'(x) y2(x) = 0.        (9.11)

Under the condition (9.11), we obtain yp' as

    yp'(x) = u(x) y1'(x) + v(x) y2'(x).        (9.12)

Differentiating (9.12), we obtain

    yp''(x) = u(x) y1''(x) + v(x) y2''(x) + u'(x) y1'(x) + v'(x) y2'(x).        (9.13)

As we have assumed that yp is a particular integral of (9.8), we substitute yp, yp', and yp''
into (9.8) (and use the fact that y1 and y2 are solutions of the homogeneous part) to
get

    u'(x) y1'(x) + v'(x) y2'(x) = F(x).        (9.14)
We solve for u', v' from (9.11) and (9.14) as follows (Cramer's rule):

    u' = − F(x) y2(x) / W(y1, y2), v' = F(x) y1(x) / W(y1, y2).

Integrating, we find

    u = − ∫ [F(x) y2(x) / W(y1, y2)] dx, v = ∫ [F(x) y1(x) / W(y1, y2)] dx.

Substituting u and v in yp(x) = y1(x) u(x) + y2(x) v(x), we find the required form of yp
given in (9.9).
Note 2. We do not write constants of integration in the expressions for u and v, since these
can be absorbed into the constants of the general solution of the homogeneous part.
Note 3. The (leading) coefficient of d^2y/dx^2 in (9.8) must be unity. If it is not unity, then
make it unity by dividing the ODE by the leading coefficient.
Problem 9.5. Solve

    d^2y/dx^2 + y = tan x.

Solution. Two linearly independent solutions of the corresponding homogeneous equation
are y1(x) = cos x and y2(x) = sin x. Hence,

    yp = y1(x) u(x) + y2(x) v(x),

where

    u(x) = − ∫ [y2 F(x) / W(y1, y2)] dx, v(x) = ∫ [y1 F(x) / W(y1, y2)] dx.

Here W(y1, y2) = 1 and F(x) = tan x. Therefore,

    u(x) = − ∫ sin x tan x dx = − ∫ (sec x − cos x) dx = − ln | sec x + tan x| + sin x,
    v(x) = ∫ cos x tan x dx = ∫ sin x dx = − cos x.

Thus, yp(x) = − cos x ln | sec x + tan x|, and hence the general solution is

    y(x) = C1 cos x + C2 sin x − cos x ln | sec x + tan x|.
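Problem 9.5's yp can be verified analytically: with L(x) = ln|sec x + tan x| and L'(x) = sec x, one gets yp' = sin x · L − 1 and yp'' = cos x · L + tan x, so yp'' + yp = tan x. The sketch below (an added check, not from the notes) encodes those hand-computed derivatives and samples well away from ±π/2:

```python
import math

def residual(x):
    # y'' + y - tan x for y_p = -cos(x) * ln|sec x + tan x|
    L = math.log(abs(1.0 / math.cos(x) + math.tan(x)))
    y = -math.cos(x) * L
    ypp = math.cos(x) * L + math.tan(x)   # from y' = sin(x) * L - 1
    return ypp + y - math.tan(x)

print(max(abs(residual(0.1 * k)) for k in range(-12, 13)))
```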
Problem 9.6. Solve

    x d^2y/dx^2 − (1 + x) dy/dx + y = x^2 e^{2x}, x > 0.

Solution. This is linear, but the coefficients are not constants. Note that y1(x) = e^x is a
solution (by inspection!) of

    x y'' − (1 + x) y' + y = 0.        (9.15)

Using the reduction of order method, another independent solution of (9.15) is given by

    y1 ∫ (1/y1^2) e^{∫ (1/x + 1) dx} dx = y1 ∫ (1/y1^2) x e^x dx = e^x ∫ x e^{−x} dx = −(x + 1).
We can take y2 = 1 + x as the other linearly independent solution. Now we use the
method of variation of parameters to find the general solution of the given equation.
The given equation can be written as

    d^2y/dx^2 − [(1 + x)/x] dy/dx + (1/x) y = x e^{2x}.        (9.16)

Hence,

    yp = y1(x) u(x) + y2(x) v(x),

where

    u(x) = − ∫ [y2 F(x) / W(y1, y2)] dx, v(x) = ∫ [y1 F(x) / W(y1, y2)] dx.

Here W(y1, y2) = −x e^x and F(x) = x e^{2x}. Therefore,

    u(x) = ∫ (1 + x) e^x dx = x e^x,
    v(x) = − ∫ e^{2x} dx = −(1/2) e^{2x}.

Thus, yp(x) = [(x − 1)/2] e^{2x}, and hence the general solution is

    y(x) = C1 e^x + C2 (x + 1) + [(x − 1)/2] e^{2x}.
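Problem 9.6's answer checks out against the original (undivided) equation: for yp = (x − 1)e^{2x}/2 one computes yp' = (2x − 1)e^{2x}/2 and yp'' = 2x e^{2x}, so x yp'' − (1 + x) yp' + yp = x² e^{2x}. An added verification sketch:

```python
import math

def residual(x):
    # x y'' - (1+x) y' + y - x^2 e^{2x} for y_p = (x - 1)/2 * e^{2x}
    e = math.exp(2 * x)
    y = (x - 1) / 2 * e
    yp = (2 * x - 1) / 2 * e
    ypp = 2 * x * e
    return x * ypp - (1 + x) * yp + y - x * x * e

print(max(abs(residual(0.1 * k)) for k in range(1, 21)))
```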
Problem 9.7. Solve

    d^2y/dx^2 + y = |x|, x ∈ (−1, 1).

Solution. Two linearly independent solutions of the corresponding homogeneous equation
are y1(x) = cos x and y2(x) = sin x. Hence,

    yp = y1(x) u(x) + y2(x) v(x),

where

    u(x) = − ∫ [y2 F(x) / W(y1, y2)] dx, v(x) = ∫ [y1 F(x) / W(y1, y2)] dx.

Here W(y1, y2) = 1 and F(x) = |x|. Therefore,

    u(x) = − ∫ |x| sin x dx = { −∫ x sin x dx = x cos x − sin x     if x ≥ 0,
                              {  ∫ x sin x dx = −x cos x + sin x    if x < 0,

    v(x) = ∫ |x| cos x dx = {  ∫ x cos x dx = x sin x + cos x       if x ≥ 0,
                            { −∫ x cos x dx = −x sin x − cos x      if x < 0.

Thus, yp(x) = |x|. Therefore, the general solution is

    y(x) = C1 cos x + C2 sin x + |x|.