PD tmin change as function of Run tab setting in simulation mode (Phoenix)

In Phoenix, I set up a dynamic PK/PD model (in PML) describing the down-regulation of enzyme synthesis as a function of the PK of a perpetrator drug. I observed that the PD tmin (the time of maximal indirect inhibition of the enzyme) changed when I changed the simulation settings on the Run tab in simulation mode. For a fixed X max range = 600:

Setting 1: # points = 200 >>> tmin ≈ 130 hours

Setting 2: # points = 300 >>> tmin ≈ 92 hours

Of course all PK/PD parameters were unchanged.

What may be the reason? With 200 points over a 600-hour range the grid spacing is only about 3 hours, so the granularity should still be fine enough to resolve the difference between 92 and 130 hours, shouldn't it?

Has anyone faced a similar situation? Any idea of the reason?

I added a supportive figure.

Doc forum # points.docx (48.6 KB)

Can you share the Phoenix project and/or the PML code and the data used as input? The more points you have, the better the resolution will be. Depending on your model, a closed-form solution for tmin might be available; otherwise we can simply simulate with the finest grid of time points possible to get the most accurate tmin.

Your plot would be more informative if you added the “points” and overlaid both results.

We need the project to be able to help. Are you using population or individual simulation?

Serge

Just to inform anyone else who sees a similar issue: the solution in individual mode was to switch “Max ODE” on the Run Options tab from “Matrix Exponent” to “Stiff”. Then 200 points runs the same as 300 points, even in individual mode, and shows the minimum around 93 hours.

The same is true if you use “auto-detect”.

We tried leaving it set to “Matrix Exponent” and adjusting the tolerances, but didn’t manage to fix this case. This is a differential-equation model, so the simulation has to start at IVAR = 0 and work forward, and I suspect the fallback method deviates a bit along the way. The plots make the change in the minimum look significant, but if you look at the scale of the y-axis, the whole curve is rather flat, so a slight numerical deviation can move the location of the minimum a long way.
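To illustrate the point about stiff systems and the non-stiff solver, here is a small Python/SciPy sketch. This is my own toy model, not the poster's PML; all parameter values are hypothetical, and the idea that the non-stiff method effectively steps on the requested output grid is an assumption made purely to mimic the “# points” dependence. A fixed-step Runge–Kutta run goes unstable at 200 points but not at 300, which moves the apparent minimum, while a stiff solver is insensitive to the grid:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: one-compartment PK (fast elimination) driving
# indirect inhibition of enzyme synthesis. Not the poster's actual model.
ke, kin, kout, ic50, c0 = 1.0, 1.0, 0.05, 0.1, 10.0

def rhs(t, y):
    c, e = y
    inh = c / (c + ic50)                      # fractional inhibition of synthesis
    return [-ke * c, kin * (1.0 - inh) - kout * e]

def rk4_on_grid(n_points, t_end=600.0):
    """Classic fixed-step RK4 stepping exactly on the requested output
    grid -- the assumed behaviour tying solver accuracy to '# points'."""
    t = np.linspace(0.0, t_end, n_points)
    h = t[1] - t[0]
    y = np.array([c0, kin / kout])            # start enzyme at baseline kin/kout
    e_out = [y[1]]
    for ti in t[:-1]:
        k1 = np.asarray(rhs(ti, y))
        k2 = np.asarray(rhs(ti + h / 2, y + h / 2 * k1))
        k3 = np.asarray(rhs(ti + h / 2, y + h / 2 * k2))
        k4 = np.asarray(rhs(ti + h, y + h * k3))
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        e_out.append(y[1])
    return t, np.array(e_out)

# 200 points -> h ~ 3 hr: outside RK4's stability region for ke = 1
t200, e200 = rk4_on_grid(200)
# 300 points -> h ~ 2 hr: inside the stability region
t300, e300 = rk4_on_grid(300)

# Reference: a stiff (BDF) solver with tight tolerances on a dense grid
ref = solve_ivp(rhs, (0.0, 600.0), [c0, kin / kout], method="BDF",
                t_eval=np.linspace(0.0, 600.0, 6001), rtol=1e-9, atol=1e-12)

print("tmin with 200 points:", t200[np.argmin(e200)])
print("tmin with 300 points:", t300[np.argmin(e300)])
print("tmin from stiff ref: ", ref.t[np.argmin(ref.y[1])])
```

In this sketch the 200-point run is unstable, so the reported minimum slides to the end of the range, while the 300-point run and the stiff reference both place the minimum early in the profile; the result depends on the number of output points in the same way as in the thread.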

Simon

PS: We will look to add more explanation of this to the documentation going forward; in the meantime, these comments from the developers should help:

If matrix exponent is actually used, it is usually by far the most accurate method.

However, if you ask for matrix exponent and it is not applicable, you get the non-stiff solver (i.e., Runge–Kutta), which may have problems with a stiff system.

Note that choosing matrix exponent does not mean it will be used; it is only used when it is applicable.

It decides whether matrix exponent is applicable by checking that every element of this matrix is constant between time points:

          Aa          A1          A2         …
Aa’   dAa’/dAa   dAa’/dA1   dAa’/dA2
A1’   dA1’/dAa   dA1’/dA1   dA1’/dA2
A2’   dA2’/dAa   dA2’/dA1   dA2’/dA2

In other words, the Jacobian of the differential equations.

It determines every one of those derivatives symbolically and asks whether each is constant between time points (time points being events such as doses, observations, and covariate changes).
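That check can be mimicked with SymPy on a hypothetical indirect-response system (my own illustration, not Phoenix's internal implementation): the dE′/dC entry of the Jacobian depends on the state C itself, so it is not constant between time points, and matrix exponent would not be applicable:

```python
import sympy as sp

# State variables and (hypothetical) parameters of an indirect-response model
C, E = sp.symbols("C E", positive=True)
ke, kin, kout, IC50 = sp.symbols("k_e k_in k_out IC50", positive=True)

# dC/dt: first-order elimination; dE/dt: synthesis inhibited by C
dC = -ke * C
dE = kin * (1 - C / (C + IC50)) - kout * E

# The matrix the engine inspects: the Jacobian of the ODE right-hand sides
J = sp.Matrix([dC, dE]).jacobian(sp.Matrix([C, E]))
print(J)

# dE'/dC equals -kin*IC50/(C + IC50)**2, which contains the state C, so
# this element is not constant between time points.
print(C in J[1, 0].free_symbols)
```

By contrast, the dC′/dC entry is just -ke, which is constant; a purely linear system (every Jacobian entry parameter-only) is exactly the case where matrix exponent applies.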

If any one of them is not constant between time points, then matrix exponent cannot be used, and the engine defaults to another method instead (RK, I think).

If this is a stiff system, the non-stiff solver (RK) will have difficulty; running either the stiff solver or auto-detect should be accurate.