PDEs and Boundary Conditions
New methods have been implemented for solving partial differential equations with boundary condition (PDE and BC) problems.
1st order PDE with a single boundary condition (BC) that does not depend on the independent variables
Linear PDE on bounded domains with homogeneous boundary conditions
Cauchy problem for hyperbolic PDE with or without sources
PDEtools commands for working with PDEs
The PDE & BC project, started five years ago, implements some of the basic methods found in textbooks for matching arbitrary functions and constants to given PDE boundary conditions of different kinds. One frequent problem is that of a 1st order PDE that can be solved without boundary conditions in terms of an arbitrary function, where a single boundary condition (BC) is given for the PDE's unknown function, and this BC does not depend on the independent variables of the problem. The problem can be solved by making simple, yet ingenious, use of differential invariants to match the boundary condition.
The examples that can now be handled using this new method, although restricted in generality to "only one 1st order linear or nonlinear PDE and only one boundary condition for the unknown function itself", illustrate well how powerful it can be to use more advanced methods.
First consider a linear example, among the simplest one could imagine:
f(x, y, z) will now be displayed as f
Now, input a boundary condition (BC) for the unknown f(x, y, z) such that this BC does not depend on the independent variables x, y, z; this BC can, however, depend on arbitrary symbolic parameters. For instance:
bc ≔ f(α + β, α − β, 1) = α⋅β
This kind of problem can now be solved in one step:
sol ≔ pdsolve(pde, bc)
To verify this result for correctness, use pdetest. It also tests the solution against the boundary condition.
pdetest(sol, pde, bc)
To obtain the solution (1.4), the PDE was first solved regardless of the boundary condition:
Next, the arbitrary function _F1(−x+y, −x+z) was determined such that the boundary condition f(α+β, α−β, 1) = α⋅β is matched; concretely, it is the mapping _F1 that was determined. You can see this mapping by reversing the solving process in two steps. Start by taking the difference between the general solution (1.6) and the solution (1.4) that matches the boundary condition:
and isolate _F1(−x+y, −x+z) here:
So this is the value of _F1(−x+y, −x+z) that was determined. To see the actual solving mapping _F1, which takes as arguments −x+y and −x+z and returns the right-hand side of (1.8), one can perform a change of variables introducing the two parameters τ__1 and τ__2 of the _F1 mapping:
PDEtools:-dchange(…, …, u → simplify(u, size))
So, the solving mapping _F1 is:
_F1 = unapply(rhs(…), τ__1, τ__2)
Although this PDE and BC example looks simple, neither this solving mapping (1.12) nor the way to obtain it just from the boundary condition f(α+β, α−β, 1) = α⋅β and the solution (1.6) is apparent.
Skipping the technical details, the key observation for computing a solving mapping is the following: given a 1st order PDE whose unknown depends on k independent variables, if the boundary condition depends on k−1 arbitrary symbolic parameters α, β, one can always seek a relationship between these k−1 parameters and the k−1 differential invariants that enter as arguments of the arbitrary function _F1 of the solution, and obtain the form of the mapping _F1 from this relationship and the BC. The method works in general. If, for instance, we change the BC (1.3), making its right-hand side a sum instead of a product,
bc ≔ f(α + β, α − β, 1) = α + β
an interesting case happens when the boundary condition depends on fewer than k−1 parameters. For instance:
bc__1 ≔ subs(β = α, bc)
sol__1 ≔ pdsolve(pde, bc__1)
As we see in this result, the additional difficulty of having fewer parameters is tackled by introducing an arbitrary constant _C1 (this is likely to evolve into something more general).
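As a side illustration (a Python sketch for a hypothetical PDE, not the worksheet's example), here is the invariant-matching idea in its simplest form: for u_x + u_y = 0 the general solution is u = _F1(y − x), and a one-parameter BC such as u(α, 2α) = α fixes _F1 through the relationship between α and the invariant y − x.

```python
# Hypothetical illustration (not the worksheet's PDE): for the 2-variable
# 1st order PDE u_x + u_y = 0, the general solution is u = _F1(y - x),
# with differential invariant y - x. For the one-parameter (k - 1 = 1)
# BC u(alpha, 2*alpha) = alpha, the invariant evaluated on the BC data
# is 2*alpha - alpha = alpha, so "invariant = alpha" gives the solving
# mapping _F1(tau) = tau, i.e. u(x, y) = y - x.

def u(x, y):
    return y - x  # _F1(tau) = tau applied to the invariant y - x

# The BC u(alpha, 2*alpha) = alpha is matched for any alpha:
for alpha in (0.5, 1.0, -2.0):
    assert abs(u(alpha, 2*alpha) - alpha) < 1e-12

# The PDE u_x + u_y = 0 holds (checked by central differences):
h = 1e-6
x0, y0 = 0.3, 1.7
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2*h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2*h)
assert abs(ux + uy) < 1e-6
print("BC and PDE both satisfied")
```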
Finally, consider a nonlinear example:
u(x, y) will now be displayed as u
Here we have two independent variables, so for illustration purposes use a boundary condition that depends on only one arbitrary parameter.
bc ≔ u(0, α) = α
All looks OK, but there is still another problem: check the arbitrary function _F1 entering the general solution of the PDE when it is tackled without any boundary condition:
Remove this RootOf to see the underlying algebraic expression:
So this is a PDE whose general solution is implicit, actually depending on an arbitrary function of the unknown u(x, y). The code handles this kind of problem in the same way, except that in cases like this there may be more than one solution. For this particular BC (1.21), there are actually three solutions:
Verify these three solutions against the PDE and the boundary condition.
More PDE problems on bounded domains can now be solved in Maple 2016.
Example: The wave equation
pde ≔ diff(u(x, t), t, t) = c^2*diff(u(x, t), x, x);
governs the displacements of a string of length l, so that 0 ≤ x ≤ l and t ≥ 0.
bc ≔ u(0, t) = 0, u(l, t) = 0;
pdsolve([pde, bc]) assuming l > 0, x ≤ l;
Many of the improvements concern the Fourier method (separation of variables by product and eigenfunction expansion). This method separates the PDE by product into two ODEs, so we now need to solve two ODE boundary problems. One of these is a Sturm-Liouville problem (an eigenvalue problem), whose solution we represent using an infinite series.
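The separated modes can be sketched numerically. The Python check below uses arbitrary illustrative values of l, c, and n (not tied to the worksheet): each product solution sin(nπx/l)·cos(nπct/l) satisfies the wave equation and the homogeneous boundary conditions, and the series solution is a superposition of such modes.

```python
import math

# One separated mode of the vibrating string problem:
#   u_n(x, t) = sin(n*pi*x/l) * cos(n*pi*c*t/l)
# solves u_tt = c^2*u_xx with u(0, t) = u(l, t) = 0; the eigenfunctions
# of the associated Sturm-Liouville problem are sin(n*pi*x/l).
# l, c, n below are arbitrary choices for the check, not worksheet values.

l, c, n = 2.0, 3.0, 4

def u(x, t):
    return math.sin(n*math.pi*x/l) * math.cos(n*math.pi*c*t/l)

# Boundary conditions hold for every t:
for t in (0.0, 0.7, 1.3):
    assert abs(u(0.0, t)) < 1e-12 and abs(u(l, t)) < 1e-12

# The PDE u_tt = c^2*u_xx holds (second-order central differences):
h = 1e-4
x0, t0 = 0.61, 0.29
utt = (u(x0, t0+h) - 2*u(x0, t0) + u(x0, t0-h)) / h**2
uxx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
assert abs(utt - c**2 * uxx) < 1e-2 * max(1.0, abs(utt))
print("mode satisfies the wave equation and boundary conditions")
```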
Example: Consider the diffusion PDE and BC problem below, where 0 ≤ x ≤ l, t ≥ 0.
pde ≔ diff(u(x, t), t) = k*diff(u(x, t), x, x);
bc ≔ D[1](u)(0, t) = 0, u(l, t) = 0, u(x, 0) = f(x);
pdsolve([pde, bc]) assuming l > 0, x ≤ l;
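For these mixed conditions u_x(0, t) = 0, u(l, t) = 0, the Sturm-Liouville eigenfunctions are cos((n + 1/2)πx/l). A quick numerical check of one series term (a Python sketch with arbitrary l, k, n; the worksheet's closed-form output is not reproduced here):

```python
import math

# One eigenfunction-expansion mode for the diffusion problem with
# u_x(0, t) = 0 and u(l, t) = 0: each series term is
#   u_n(x, t) = cos(mu*x) * exp(-k*mu^2*t),  mu = (n + 1/2)*Pi/l.
# l, k, n below are arbitrary values chosen for the check.

l, k, n = 2.0, 0.5, 3
mu = (n + 0.5) * math.pi / l

def u(x, t):
    return math.cos(mu*x) * math.exp(-k * mu**2 * t)

# Boundary conditions:
h = 1e-6
for t in (0.1, 1.0):
    assert abs((u(h, t) - u(-h, t)) / (2*h)) < 1e-6   # u_x(0, t) = 0
    assert abs(u(l, t)) < 1e-12                       # u(l, t) = 0

# The PDE u_t = k*u_xx (finite differences):
h = 1e-4
x0, t0 = 0.7, 0.4
ut = (u(x0, t0+h) - u(x0, t0-h)) / (2*h)
uxx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
assert abs(ut - k*uxx) < 1e-4
print("diffusion mode satisfies PDE and boundary conditions")
```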
Example: Consider Laplace's equation on a bounded circular domain. The PDE and BC problem below is in polar coordinates, with 0 ≤ r ≤ R, 0 ≤ θ ≤ 2π.
pde ≔ diff(u(r, theta), r, r) + 1/r*diff(u(r, theta), r) + 1/r^2*diff(u(r, theta), theta, theta) = 0;
bc ≔ u(R, theta) = f(theta), u(r, 0) = u(r, 2*Pi), D[2](u)(r, 0) = D[2](u)(r, 2*Pi);
pdsolve([pde, bc]) assuming r ≤ R, R > 0, theta ≥ 0, theta ≤ 2*Pi;
Problems that include a source term can now be solved as well.
Example: Consider the following inhomogeneous PDE and BC problem, where the PDE includes a source term. To solve it, we use Duhamel's principle: the solution of the inhomogeneous problem is obtained by solving the homogeneous version of the problem for a new variable w(x, t, τ) with initial condition w(x, 0, τ) = f(x, τ), and then setting u(x, t) = ∫₀ᵗ w(x, t−τ, τ) dτ. The domain is bounded: k > 0, 0 ≤ x ≤ Pi, t ≥ 0.
pde ≔ diff(u(x, t), t) − k*diff(u(x, t), x, x) = f(x, t);
bc ≔ u(0, t) = 0, u(Pi, t) = 0, u(x, 0) = 0;
pdsolve([pde, bc]) assuming k > 0, x ≥ 0, x ≤ Pi;
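Duhamel's principle can be checked in closed form for a particular source. The Python sketch below uses the hypothetical choice f(x, t) = sin(x) with k = 1 (not the general f(x, t) of the example): the homogeneous solution with initial data sin(x) is e^(−t)·sin(x), and the Duhamel integral evaluates to (1 − e^(−t))·sin(x).

```python
import math

# Closed-form check of Duhamel's principle for the hypothetical source
# f(x, t) = sin(x), k = 1, on 0 <= x <= Pi. The homogeneous problem
# w_t = w_xx, w(0) = w(Pi) = 0 with w(x, 0, tau) = sin(x) has
# w(x, t, tau) = exp(-t)*sin(x), so the Duhamel integral gives
#   u(x, t) = Int(exp(-(t - tau))*sin(x), tau = 0..t)
#           = (1 - exp(-t))*sin(x).

def u(x, t):
    return (1.0 - math.exp(-t)) * math.sin(x)

# Check u_t - u_xx = sin(x) by finite differences at a few points:
h = 1e-5
for (x0, t0) in [(0.5, 0.3), (1.2, 1.0), (2.5, 0.1)]:
    ut = (u(x0, t0+h) - u(x0, t0-h)) / (2*h)
    uxx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
    assert abs(ut - uxx - math.sin(x0)) < 1e-4

# Boundary and initial conditions:
assert abs(u(0.0, 2.0)) < 1e-12 and abs(u(math.pi, 2.0)) < 1e-12
assert abs(u(1.0, 0.0)) < 1e-12
print("Duhamel solution verified")
```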
The method for solving the Cauchy problem for hyperbolic PDE (in unbounded domains) has been expanded to include different types of sources as well as functions in the initial conditions.
Here is an example without sources in the PDE, where now we can have different types of functions in the initial conditions.
pde ≔ diff(u(x, t), t, t) − 4*diff(u(x, t), x, x) = 0;
conds ≔ u(x, 0) = exp(−x^2), D[2](u)(x, 0) = 0;
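For these initial conditions the classical d'Alembert formula gives the solution explicitly (the formula below is the standard textbook result, not Maple's output, which is not shown here); a quick Python check:

```python
import math

# d'Alembert's formula for the Cauchy problem u_tt - 4*u_xx = 0 with
# u(x, 0) = exp(-x^2), u_t(x, 0) = 0 (here c = 2, so c^2 = 4):
#   u(x, t) = (exp(-(x - 2*t)^2) + exp(-(x + 2*t)^2)) / 2

def u(x, t):
    return 0.5 * (math.exp(-(x - 2*t)**2) + math.exp(-(x + 2*t)**2))

# Initial conditions:
assert abs(u(0.7, 0.0) - math.exp(-0.49)) < 1e-12
h = 1e-6
assert abs((u(0.7, h) - u(0.7, -h)) / (2*h)) < 1e-6   # u_t(x, 0) = 0

# The PDE (second-order central differences):
h = 1e-4
x0, t0 = 0.4, 0.9
utt = (u(x0, t0+h) - 2*u(x0, t0) + u(x0, t0-h)) / h**2
uxx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
assert abs(utt - 4*uxx) < 1e-4
print("d'Alembert solution verified")
```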
And an example with a source in the PDE:
pde ≔ diff(u(x, t), t, t) − 4*diff(u(x, t), x, x) = f(a);
conds ≔ u(x, 0) = 0, D[2](u)(x, 0) = x^2;
New PDEtools general-purpose commands and options for researching and solving PDEs have been implemented.
Convert a first-order PDE that contains the dependent variable explicitly into one that does not
The command, within the PDEtools package, is called ToMissingDependentVariable. It works by introducing a new dependent variable that is a function of the independent variables of the problem as well as of the old dependent variable. In the resulting PDE, the new dependent variable does not appear explicitly. For example, consider the following nonlinear first order PDE, which pdsolve could not previously solve:
pde_with_m ≔ y⋅diff(m(x, y), x)^2 − diff(m(x, y), y)^2 + m(x, y)⋅diff(m(x, y), y);
Its solution is now found by pdsolve by first converting it into a PDE without a dependent variable. For this, it uses the command ToMissingDependentVariable:
pde_missing_m ≔ PDEtools:-ToMissingDependentVariable(pde_with_m, m(x, y), v);
That is, in order to solve pde_with_m, pdsolve creates and solves the above pde_missing_m, using a new function v(x,y,m), and then changes variables back to m(x,y) to give the solution:
m(x, y) = exp((_C3⋅ln(y) − 2⋅_C2⋅x + sqrt(4⋅_C2^2⋅y^2 + _C3^2) − 2⋅_C1 − _C3⋅csgn(_C3)⋅ln((2⋅_C3⋅csgn(_C3)⋅sqrt(4⋅_C2^2⋅y^2 + _C3^2) + _C3)/y))/(2⋅_C3)) &where [There are no arbitrary functions]
New option reverse in the charstrip command, for computing the family of PDEs that corresponds to a given characteristic strip
Consider a PDE, its characteristic strip, and its solution. What if we could send just the characteristic strip back to PDEtools and ask for all the PDEs that correspond to it? Now we can, and more often than not it is a whole family of PDEs that corresponds to any given characteristic strip. This allows users to search for families of PDEs whose solutions may be found thanks to knowing the (solvable) characteristic strip of only one member of such a family.
In the example below, note the use of the option simplifyusingpde = false; this is necessary so that the characteristic strip does not get simplified using the given PDE, which would render the reverse option nonfunctional.
Example: Here is a PDE for which we know the characteristic strip:
pde ≔ x*diff(f(x, y, z), z)^2 − f(x, y, z) + y^2*diff(f(x, y, z), y) = 0;
PDEtools:-charstrip(pde, f(x, y, z), simplifyusingpde = false);
And here is the family of PDEs that corresponds to that characteristic strip (note the arbitrary function _F1(x)):
PDEtools:-charstrip(%, f(x, y, z), reverse);
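The characteristic-strip machinery can also be illustrated numerically. The Python sketch below uses a hypothetical quasilinear PDE, simpler than the example above: it integrates the strip ODEs and checks that a characteristic starting on a solution surface stays on it.

```python
# Sketch of how a characteristic strip determines solutions, for the
# hypothetical quasilinear PDE  x*u_x + y*u_y = u  (not the worksheet's
# example). Its characteristic ODEs are dx/ds = x, dy/ds = y, du/ds = u,
# and u(x, y) = y^2/x is one solution (check: x*u_x + y*u_y
# = -y^2/x + 2*y^2/x = u). Integrating the strip from a point on the
# solution surface must keep the point on that surface.

def rk4_step(state, h):
    def f(s):
        # The characteristic vector field: dx/ds = x, dy/ds = y,
        # du/ds = u, so the field equals the state itself.
        return s
    k1 = f(state)
    k2 = f([v + 0.5*h*k for v, k in zip(state, k1)])
    k3 = f([v + 0.5*h*k for v, k in zip(state, k2)])
    k4 = f([v + h*k for v, k in zip(state, k3)])
    return [v + h*(a + 2*b + 2*c + d)/6
            for v, a, b, c, d in zip(state, k1, k2, k3, k4)]

x, y = 1.0, 2.0
state = [x, y, y**2 / x]          # start on the solution surface u = y^2/x
for _ in range(1000):             # integrate the strip up to s = 1
    state = rk4_step(state, 0.001)
x1, y1, u1 = state
assert abs(u1 - y1**2 / x1) < 1e-9  # still on the surface u = y^2/x
print("characteristic stays on the solution surface")
```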