## Uniqueness and Error estimates via Kinetic Entropy Defect Measure

Here are a few thoughts from my preparation for the exam in Kinetic Equations at the Université de Savoie, France. The course was taught by Christian Bourdarias and Stéphane Gerbi. I had to study an article by Benoît Perthame entitled *Uniqueness and Error estimates in First Order Quasilinear Conservation Laws via the Kinetic Entropy Defect Measure*.

This was a very nice article to study, since it used distribution theory, measure theory and regularization arguments in an essential way. It showed the power of these tools and motivated me to learn more about them.

As the title says, the article gives a relatively new proof of the uniqueness of the solution of a scalar conservation law coupled with a family of entropy inequalities. The only proof known when the article was published was due to Kruzkov, and it is more intricate and harder to follow than the one provided in the article. Moreover, the estimates on the entropy defect measure, which will be introduced below, yield error estimates for approximate equations, and these in particular imply uniqueness at once.

Here are my detailed notes on the article. They are handwritten, but I think they are readable: Perthame-Uniqueness and Error Estimates

Consider solutions to first order quasilinear scalar conservation laws

$$\partial_t u + \operatorname{div}_x A(u) = 0, \qquad x \in \mathbb{R}^d,\ t \ge 0,$$

endowed with a family of entropy inequalities

$$\partial_t S(u) + \operatorname{div}_x \eta(u) \le 0,$$

for all Lipschitz continuous and convex functions $S$ and corresponding entropy fluxes $\eta$ defined by $\eta' = S'A'$.

We call $u$ an entropy solution if $u$ satisfies both the conservation law and all the entropy inequalities. It can be proved directly that if $u, v$ are entropy solutions then

$$\partial_t |u - v| + \operatorname{div}_x \big[\operatorname{sgn}(u - v)\,(A(u) - A(v))\big] \le 0 \quad \text{in the sense of distributions}.$$

From here uniqueness follows directly, if we consider an initial condition in $L^1$ at $t = 0$ and we assume that the solution is continuous in time, with values in $L^1$, at $t = 0$. A proof can be found in Evans, *Partial Differential Equations*, for the one-dimensional scalar case, but it can easily be translated to the multidimensional case. The idea is to plug into the above contraction inequality a suitable choice of test functions: for example, products of a piecewise linear approximation of the indicator function of a time interval $[t_1, t_2]$ and a piecewise linear cutoff approximating the indicator function of a large ball in $x$, where the linear parts are chosen such that the test function is continuous. Passing to the limit first in the time ramps and then in the radius of the ball, we get

$$\int_{\mathbb{R}^d} |u(t_2) - v(t_2)|\, dx \le \int_{\mathbb{R}^d} |u(t_1) - v(t_1)|\, dx, \qquad 0 < t_1 \le t_2.$$

Taking then $t_1 \to 0$ and using the continuity of $u, v$ in $L^1$ at $t = 0$ we obtain

$$\|u(t) - v(t)\|_{L^1(\mathbb{R}^d)} \le \|u^0 - v^0\|_{L^1(\mathbb{R}^d)},$$

where $u^0, v^0$ are the initial conditions in $L^1$. The same initial condition then clearly implies uniqueness.
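The $L^1$ contraction can also be observed numerically. Here is a minimal sketch of my own (not from the article): it evolves two different initial data for Burgers' equation, $A(u) = u^2/2$, with the Lax–Friedrichs scheme, which is monotone under a CFL condition and hence $L^1$-contractive, and checks that the discrete $L^1$ distance between the two solutions never increases. All names and parameters below are my own choices for the illustration.

```python
import numpy as np

def lax_friedrichs(u, dx, dt):
    """One Lax-Friedrichs step for Burgers' equation u_t + (u^2/2)_x = 0, periodic."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - dt / (2 * dx) * (up**2 / 2 - um**2 / 2)

N = 400
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx                       # CFL: |u| stays <= 1.5 here, so 0.4*1.5 < 1

u = np.sin(x)                       # two different initial conditions
v = np.sin(x) + 0.5 * np.cos(2 * x)

dists = []
for n in range(500):
    dists.append(np.sum(np.abs(u - v)) * dx)   # discrete L^1 distance
    u = lax_friedrichs(u, dx, dt)
    v = lax_friedrichs(v, dx, dt)

# the L^1 distance is nonincreasing in time, even across shock formation
print(dists[0], dists[-1])
assert all(dists[i + 1] <= dists[i] + 1e-9 for i in range(len(dists) - 1))
```

The contraction here is a discrete theorem in its own right (monotone conservative schemes are $L^1$-contractive), so the check passes exactly, up to rounding.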

To obtain the above contraction inequality we introduce the following kinetic formulation, which replaces the conservation law, coupled with the whole family of entropy inequalities, by a single equation involving an extra variable $\xi$. The kinetic formulation is

$$\partial_t \chi(\xi, u(t,x)) + a(\xi) \cdot \nabla_x \chi(\xi, u(t,x)) = \partial_\xi m(t,x,\xi), \qquad a = A',$$

where the measure $m$ in the RHS is nonnegative, locally bounded, and is called the *entropy defect measure*. Also, the function $\chi$ is defined as

$$\chi(\xi, u) = \begin{cases} +1, & 0 < \xi < u, \\ -1, & u < \xi < 0, \\ 0, & \text{otherwise}. \end{cases}$$

For a proof of the fact that the kinetic formulation is equivalent to the entropy formulation you may consult the references in the article. A formal way to see it is to notice that the distribution in the LHS of the entropy inequality is nonpositive, and since every nonnegative distribution is a measure, we can consider the LHS to be a measure multiplied by a negative scalar ($-1$ in our case); the kinetic formulation is then just the derivative with respect to $\xi$ of the entropy inequality, with $S$ replaced by Kruzkov's entropies $S_\xi(u) = |u - \xi|$. Conversely, if we know the kinetic formulation, then we multiply it by $S'(\xi)$, where $S$ is Lipschitz continuous and convex, and then integrate with respect to $\xi$ to obtain, after an integration by parts in $\xi$ and using $\int S'(\xi)\,\chi(\xi,u)\,d\xi = S(u) - S(0)$ and $\int a(\xi)\,S'(\xi)\,\chi(\xi,u)\,d\xi = \eta(u) - \eta(0)$,

$$\partial_t S(u) + \operatorname{div}_x \eta(u) = -\int_{\mathbb{R}} S''(\xi)\, m(t,x,\xi)\, d\xi.$$

This readily implies the entropy inequality, since $S'' \ge 0$ ($S$ is convex) and the measure $m$ is nonnegative.
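To get a feel for the function $\chi$, here is a quick numerical sanity check of my own (with an arbitrary quadrature grid): it verifies the basic identities $\int_{\mathbb{R}} \chi(\xi, u)\,d\xi = u$ and $\int_{\mathbb{R}} S'(\xi)\,\chi(\xi, u)\,d\xi = S(u) - S(0)$, here with the particular entropy $S(u) = u^2$.

```python
import numpy as np

def chi(xi, u):
    """chi(xi, u) = +1 if 0 < xi < u, -1 if u < xi < 0, 0 otherwise."""
    return np.where((0 < xi) & (xi < u), 1.0,
                    np.where((u < xi) & (xi < 0), -1.0, 0.0))

xi = np.linspace(-5, 5, 200001)     # fine quadrature grid on [-5, 5]
dxi = xi[1] - xi[0]

for u in [2.3, -1.7, 0.0, 4.99]:
    # zeroth moment: int chi(xi, u) dxi recovers u itself
    m0 = np.sum(chi(xi, u)) * dxi
    # int S'(xi) chi(xi, u) dxi = S(u) - S(0) with S(u) = u^2, S'(xi) = 2 xi
    m1 = np.sum(2 * xi * chi(xi, u)) * dxi
    print(f"u = {u:5.2f}: moment0 = {m0:8.5f}, moment1 = {m1:8.5f}")
    assert abs(m0 - u) < 1e-3 and abs(m1 - u * u) < 1e-3
```

These two moments are exactly the computation used above with $S(u) = u$ and $S(u) = u^2$.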

We can now present the first theorem of the article, which proves that the contraction inequality holds and gives an estimate for the defect measure appearing in it. In the following we denote by $m_u, m_v$ the entropy defect measures associated to $u$ and $v$, and by $\mu$ the nonnegative measure for which the LHS of the contraction inequality equals $-\mu$.

**Theorem (2.1)** If $u, v$ are entropy solutions, continuous in time with values in $L^1$, then the contraction inequality holds. Moreover, for any regularizing kernel, $\mu$ can be written as a limit of regularized quantities built from $m_u$ and $m_v$; this yields an explicit formula for $\mu$ in terms of $m_u$ and $m_v$ (I refer to the article for the precise formulas).

Remark that if we choose the regularizing kernel to be positive, then the nonnegativity of $\mu$ follows immediately from the second equality, since $m_u$ and $m_v$ are nonnegative, and the first limit helps prove the second equality. The proof of the theorem is made in three steps.
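One elementary identity that makes the kinetic approach to the $L^1$ contraction plausible (this is my own remark, using only the definition of $\chi$, not a formula taken from the article) is that the distance $|u - v|$ of two values $u, v \in \mathbb{R}$ is exactly the squared $L^2_\xi$ distance of their kinetic functions:

```latex
\int_{\mathbb{R}} \big( \chi(\xi,u) - \chi(\xi,v) \big)^2 \, d\xi
  = \int_{\mathbb{R}} |\chi(\xi,u)| \, d\xi
    + \int_{\mathbb{R}} |\chi(\xi,v)| \, d\xi
    - 2 \int_{\mathbb{R}} \chi(\xi,u)\, \chi(\xi,v) \, d\xi
  = |u| + |v| - 2 \min(|u|,|v|) \, \mathbf{1}_{uv > 0}
  = |u - v|
```

using that $\chi^2 = |\chi|$ (as $\chi$ takes only the values $0, \pm 1$), that $\int |\chi(\xi,u)|\,d\xi = |u|$, and that $\chi(\xi,u)\chi(\xi,v) = 1$ exactly on the $\xi$-interval lying between $0$ and both $u$ and $v$, which is empty when $u$ and $v$ have opposite signs. So contracting $u - v$ in $L^1_x$ is the same as contracting $\chi(\cdot,u) - \chi(\cdot,v)$ in $L^2_{x,\xi}$.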

**Proof:** *Step 1.* We first prove two identities satisfied by $u$ and $v$ separately (I refer to my handwritten notes for the exact formulas), where $\eta_\xi$ denotes the entropy flux corresponding to Kruzkov's entropy $S_\xi(u) = |u - \xi|$, namely $\eta_\xi(u) = \operatorname{sgn}(u - \xi)\,(A(u) - A(\xi))$.
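As an aside, here is a tiny check of my own (an illustration, not necessarily one of the identities of Step 1) of the pointwise identity $|\chi(\xi,u)| = \operatorname{sgn}(\xi)\,\chi(\xi,u)$, the kind of algebraic fact that lets Kruzkov's entropies be manipulated through $\chi$:

```python
import numpy as np

def chi(xi, u):
    """chi(xi, u) = +1 if 0 < xi < u, -1 if u < xi < 0, 0 otherwise."""
    return np.where((0 < xi) & (xi < u), 1.0,
                    np.where((u < xi) & (xi < 0), -1.0, 0.0))

rng = np.random.default_rng(0)
xi = rng.uniform(-3, 3, size=10000)
u = rng.uniform(-3, 3, size=10000)

# chi(xi, u) always has the sign of xi, so |chi| = sgn(xi) * chi pointwise
assert np.array_equal(np.abs(chi(xi, u)), np.sign(xi) * chi(xi, u))
print("identity |chi| = sgn(xi) * chi verified on 10000 random samples")
```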

*Step 2.* Consider a regularizing kernel $\varphi_\varepsilon$. Convolving the kinetic equation for $u$ with $\varphi_\varepsilon$, multiplying with the corresponding regularized quantity for $v$, and finally integrating with respect to $\xi$, we get an identity in which the RHS has been integrated by parts. Letting $\varepsilon \to 0$ in the above relation we obtain a relation involving the first limit in the theorem, and by Step 1 it easily follows that this limit exists.

*Step 3.* I will present only a sketch, the details being similar to Step 2: one repeats the regularization argument symmetrically in $u$ and $v$ and combines the resulting relations, with the same kernel $\varphi_\varepsilon$. The conclusion easily follows by taking $\varepsilon \to 0$.

For the second result of the article, we consider an approximate kinetic formulation: the kinetic equation with additional terms in the RHS, each of which is a $k$-th order derivative of some error term. We also consider a suitable error norm for each of these error terms.

The main result of the second theorem of the article is the following error estimate.

**Theorem (3.1)** With the same conditions as in the first theorem on $u$ and $v$, we have an error estimate for $\|u - v\|_{L^1}$ in terms of the error norms above (again, I refer to the article for the precise statement).

The proof of this theorem is again divided into several steps, where the first three steps can be done like the first steps of the first theorem (there are some complications due to the error terms, of course, but the main idea is the same). In the last step, the estimates of Proposition 2.2 from the article are used to deduce an estimate for the LHS of the conclusion in terms of the regularization parameter $\varepsilon$ and the error norms. Optimizing over $\varepsilon$ then gives the desired result.

Finally, at the end of the article an application is presented: the approximation of the conservation law by the diffusion equation

$$\partial_t u_\varepsilon + \operatorname{div}_x A(u_\varepsilon) = \varepsilon\, \Delta u_\varepsilon,$$

with the same initial data. For this approximation we obtain an explicit $L^1$ error estimate between $u_\varepsilon$ and the entropy solution $u$, as a simple application of Theorem 3.1.
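As a rough numerical illustration of this convergence (my own sketch, using an explicit finite-difference scheme and a smooth solution before shock formation, not the article's setting), one can solve the viscous Burgers equation $\partial_t u + \partial_x(u^2/2) = \varepsilon\,\partial_x^2 u$ for decreasing $\varepsilon$ and watch the $L^1$ error against the inviscid solution shrink:

```python
import numpy as np

N = 400
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
t_end = 0.5

def u0(y):
    return 0.5 * np.sin(y)

# inviscid entropy solution before shock formation: solve u = u0(x - u t)
# by fixed-point iteration (a contraction since t * max|u0'| = 0.25 < 1)
u_exact = u0(x)
for _ in range(100):
    u_exact = u0(x - u_exact * t_end)

def viscous_burgers(eps):
    """Explicit scheme for u_t + (u^2/2)_x = eps * u_xx, periodic in x."""
    u = u0(x)
    dt = min(0.2 * dx, 0.25 * dx**2 / eps)   # advection CFL + diffusion stability
    steps = int(t_end / dt)
    dt = t_end / steps
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        flux = (up**2 - um**2) / (4 * dx)          # central difference of u^2/2
        diff = eps * (up - 2 * u + um) / dx**2
        u = u + dt * (diff - flux)
    return u

errs = [np.sum(np.abs(viscous_burgers(eps) - u_exact)) * dx
        for eps in (0.2, 0.05, 0.0125)]
print(errs)                                  # L^1 errors shrink with eps
assert errs[0] > errs[1] > errs[2]
```

Extracting the precise rate in $\varepsilon$ from such an experiment is delicate (the discretization error interferes), which is exactly why a clean analytic estimate like the one in the article is valuable.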

For more details, you can consult the article, and the references therein.