Biologic drug PK and differences from NONMEM

Dear colleague,

I have a study to evaluate IgG PK in monkeys at multiple dose levels (0.03 to 30 mg/kg). The IgG was administered as an IV bolus on Day 1 and Day 8, and blood samples were collected daily until Day 14. I used a two-compartment population model including both linear and target-mediated (Michaelis-Menten) clearance. The run method is Naïve Pooled because the dataset is small (300 observations) and the model has many parameters.

1) Since the drug is given on Day 1 and Day 8, do you suggest using only the concentrations after Day 1 to estimate the PK parameters, or should the concentrations after both Day 1 and Day 8 be included in the dataset?

2) I added a Michaelis-Menten pathway in parallel with linear clearance to the built-in two-compartment IV model by editing the graphical model. The dosepoint statement below was generated automatically. Can anyone explain it? Also, if there are no urine data, what is the function of the urinecpt statement (one extra urinecpt was generated when I added an elimination compartment)?

dosepoint(A1, idosevar = A1Dose, infdosevar = A1InfDose, infratevar = A1InfRate)

urinecpt(A0 = (VMax * C / (C + Km)))

urinecpt(A3 = (CL * C))

3) The EpsShrinkage is -0.001. I believe this shrinkage carries the same meaning as in NONMEM. Why is the value negative?

Besides the questions related to the above analysis, I have some additional questions related to Phoenix.

4) In NONMEM, $OMEGA and $SIGMA are needed in the control stream for the inter-individual and residual unexplained variability. I didn't see an equivalent in Phoenix for inter-individual variability. If I use the proportional residual error model, there are two statements:

error(CEps = 1)
observe(CObs = C * (1 + CEps))

Here CEps is an initial absolute value (1 in this case), NOT a variance like NONMEM's SIGMA estimate?

5) I checked the core output and found the condition number, shrinkage, etc. Do these quantities carry the same meaning as in NONMEM? How do people evaluate whether a model is good in Phoenix (is the return code one of the criteria)?

6) Occasion 1: if the population box is cleared, the run method automatically changes to Naïve Pooled. Occasion 2: I start with the Naïve Pooled method and then clear the population box.

What will be the difference between these two occasions?

1) Since the drug is given on Day 1 and Day 8, do you suggest using only the concentrations after Day 1 to estimate the PK parameters, or should the concentrations after both Day 1 and Day 8 be included in the dataset?

I would include both days in the analysis, but you need to pay attention to the following:
Between Day 1 and Day 8, make sure the time is not reset to 0. If you believe there is no drug left at the end of each dosing interval, you could reset the time to 0 on Day 8, but that requires a fair amount of data manipulation (which can be done with the interface). Unless numerical issues arise from having to integrate from 0 out to Day 8, just do not reset and keep the real time since the first dose. If you do want to reset, let me know and I will explain in detail how to proceed.

2) The PK may have between-occasion variability. To test that, you can add between-occasion variability by setting occasion = 1 for Day 1 and occasion = 2 for Day 8. Again, if you do not know how to do it, please let me know and I will explain in detail. It would be best to have the project; if it is private, contact Certara support to ask how to proceed, and they will send it to me if needed.

2) I added a Michaelis-Menten pathway in parallel with linear clearance to the built-in two-compartment IV model by editing the graphical model. The dosepoint statement below was generated automatically. Can anyone explain it? Also, if there are no urine data, what is the function of the urinecpt statement (one extra urinecpt was generated when I added an elimination compartment)?
In the dosepoint statement, pay attention only to the first part:

dosepoint(A1)

You do not need the remaining part. It was added for compatibility with WinNonlin but is really not necessary. It just means that the dose is given into the A1 compartment (the plasma compartment here).

urinecpt(A0 = (VMax * C / (C + Km)))
urinecpt(A3 = (CL * C))

The urinecpt statements are not necessary if you do not have urine data. Nothing will happen if you switch to textual mode and delete them.

Each one is in fact a differential equation: A0 accumulates the drug eliminated through the nonlinear pathway, while A3 accumulates the drug eliminated through the linear pathway.
It can be useful to know the total amount eliminated through each pathway at any time, but again it is not mandatory to keep them.
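As a numeric sketch (plain Python, not PML, with made-up parameter values), the model behind these statements is the following ODE system; A0 and A3 do nothing except integrate the amount eliminated by each pathway, which is why dropping them does not change the fit:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (not from the actual study)
V, V2, CL, Q, Vmax, Km = 40.0, 30.0, 0.2, 0.5, 5.0, 1.0

def rhs(t, y):
    A1, A2, A0, A3 = y                # central, peripheral, MM-eliminated, linearly eliminated
    C = A1 / V                        # central concentration
    C2 = A2 / V2                      # peripheral concentration
    mm = Vmax * C / (C + Km)          # nonlinear (target-mediated) elimination rate
    lin = CL * C                      # linear elimination rate
    dA1 = -lin - mm - Q * (C - C2)    # central compartment mass balance
    dA2 = Q * (C - C2)                # distribution to/from peripheral compartment
    return [dA1, dA2, mm, lin]        # A0 and A3 just accumulate the eliminated amounts

dose = 100.0                          # IV bolus into A1 at t = 0
sol = solve_ivp(rhs, (0, 336), [dose, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)

# Mass balance check: everything dosed ends up in A1 + A2 + A0 + A3
total = sol.y[:, -1].sum()
```

At any time point, `sol.y[2]` and `sol.y[3]` give the cumulative amounts cleared by the saturable and linear routes, respectively, which is exactly the bookkeeping the two urinecpt statements provide.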

Epsilon shrinkage: it is computed as 1 - SD(IWRES), using the IWRES values shown on the Residuals worksheet.

I believe it is only a numerical/precision issue here, and it means there is very little shrinkage. When you have very sparse data (like one observation per patient), you will see high shrinkage because the individual fits tend to be too perfect (one sample often leads to predicted ≈ observed). I would not worry about the small negative value; it means no shrinkage.
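A quick illustration of why the reported value can dip slightly below zero, assuming only that the formula is 1 - SD(IWRES): for a well-fit model with rich data, IWRES is approximately N(0, 1), so the sample SD hovers around 1 and sampling noise alone can make 1 - SD marginally negative.

```python
import numpy as np

rng = np.random.default_rng(0)
iwres = rng.standard_normal(300)      # stand-in for the IWRES column (300 observations)
eps_shrinkage = 1.0 - iwres.std()     # Phoenix's reported epsilon shrinkage
# eps_shrinkage lands near 0; its sign is pure sampling noise
```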

4) In NONMEM, $OMEGA and $SIGMA are needed in the control stream for the inter-individual and residual unexplained variability. I didn't see an equivalent in Phoenix for inter-individual variability.

The $OMEGA input in NLME can be found by clicking on the Random Effects tab, where you will see the variance-covariance matrix you can fill in. The default is diagonal with 1 on the diagonal.

$SIGMA in NONMEM is the variance of the residual error, whereas in Phoenix CEps is the standard deviation of the residual error.

In the Results tab, the Omega table gives you the estimated variance-covariance matrix.

The Theta table gives you, as the last parameter(s), all the model parameters linked to the error model.
If, for example, you used a multiplicative error model, you will have only one CEps.

If I use the proportional residual error model, there are two statements:
error(CEps = 1)
observe(CObs = C * (1 + CEps))
Here CEps is an initial absolute value (1 in this case), NOT a variance like NONMEM's SIGMA estimate?
CEps = 1 is a standard deviation, but it plays the role of a CV because of the way the error is defined.

error(CEps = 1) means that you have an error model in which the observations are randomly sampled as follows: nature draws a random value from a normal distribution with mean 0 and standard deviation CEps (the 1 being the initial estimate of that standard deviation).

In the second statement (observe), CEps stands for the value of that random draw (not the standard deviation).

Now you compute CObs = C * (1 + CEps); this gives you a random sample from your observation distribution, which appears as the first observation in your data set. The same is done for each observation and each subject, and that is how the overall random sample (the data set) arises.
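The sampling process just described can be sketched in plain Python (not PML). The 0.15 below is a made-up fitted SD for CEps; the 1 in error(CEps = 1) is only its initial value. Because the draw multiplies C, the SD acts like a coefficient of variation:

```python
import numpy as np

rng = np.random.default_rng(1)
sd_eps = 0.15                            # hypothetical estimated SD of CEps (~15% CV)
C = np.full(100_000, 10.0)               # model-predicted concentrations
eps = rng.normal(0.0, sd_eps, C.size)    # one random draw of CEps per observation
CObs = C * (1 + eps)                     # proportional error: noise scales with C

rel_sd = (CObs / C - 1).std()            # recovers ~0.15, i.e. roughly a 15% CV
```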

5) I checked the core output and found the condition number, shrinkage, etc. Do these quantities carry the same meaning as in NONMEM? How do people evaluate whether a model is good in Phoenix (is the return code one of the criteria)?

They should share the same meaning.

Model goodness of fit can be assessed by looking at the diagnostics shown in the Results tab. The return code just tells you whether the optimizer was able to converge under different criteria (1 to 4 are usually accepted). A model can be very bad and still converge, so I would not use the return code as the criterion for declaring a model acceptable.

CWRES is the diagnostic I would start with (it applies only if you have random effects). The CWRES vs time (or vs population-predicted concentration) should be normally distributed with mean 0 and variance 1.
Visually, you should see roughly 95% of the CWRES values between -2 and 2, the blue line around 0 (the running average of your CWRES), and the red lines around 0.7 and -0.7 (the standard deviation based on the absolute CWRES values). If it does not look good, you may have model misspecification or convergence issues.
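The visual check above has a simple numeric counterpart (illustrative only; the array here stands in for the CWRES column of the residuals output):

```python
import numpy as np

rng = np.random.default_rng(2)
cwres = rng.standard_normal(300)            # stand-in for the CWRES column

mean_c = cwres.mean()                       # should be near 0
sd_c = cwres.std()                          # should be near 1
frac_in_band = np.mean(np.abs(cwres) <= 2)  # should be roughly 0.95
```

Large departures from these targets (e.g. a drifting mean over time or far fewer than 95% of values inside the band) are the numeric signature of the misspecification the plots reveal.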

To build confidence that the right processes are in your model, look at DV vs IPRED: the scatter should be randomly distributed around the unity line. If it is not, you may have model misspecification, or the model did not converge at all. Then look at DV vs PRED; again, the scatter should be randomly distributed around the unity line. If it is not, you may have nonlinear processes that are not captured, which is usually not visible when plotting only DV vs IPRED. You then need to look into the model and check for nonlinearity in some of the processes.

Copy your model to the workflow, accept all fixed and random effect estimates, and shift the run option to the last one (SIM/PRED). You will get the VPC (it applies only if you have random effects).

The VPC gives you the 5%, 50%, and 95% prediction bands, and the program superimposes these bands on the observed data.

The observed data should be randomly centered around the 50% band, and roughly 90% of the data should fall between the 5% and 95% quantiles.
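A minimal sketch of the VPC idea (hypothetical numbers, not Phoenix output): simulate many replicates from the final model, take the 5%/50%/95% quantiles of the simulations at each time point, and compare the observed data against those bands.

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.linspace(0, 336, 15)
typical = 10.0 * np.exp(-0.01 * times)                          # made-up typical profile
sims = typical * (1 + rng.normal(0, 0.2, (1000, times.size)))   # 1000 simulated replicates
lo, mid, hi = np.percentile(sims, [5, 50, 95], axis=0)          # the three VPC bands

# Observed data should center on `mid`, with ~90% falling between `lo` and `hi`.
```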

6) Occasion 1: if the population box is cleared, the run method automatically changes to Naïve Pooled. Occasion 2: I start with the Naïve Pooled method and then clear the population box.
What will be the difference between these two occasions?
If you clear the population box but sort by individual, the program will take each individual separately and perform an individual fit.

If you do not clear the population box and use Naïve Pooled, it is a population fit where all the data enter one analysis, but under the assumption that all individuals share the same parameters.

A last thought: I think it would be very useful to attend one of the population modeling courses Certara provides, as I can see how you would benefit. I would suggest the intermediate and/or advanced levels, as you seem to have the appropriate skills for the more advanced courses.
Best Regards
Serge

e.g. http://www.certarauniversity.com/lms/index.php?r=course/details&id=30

Dear Serge,

I appreciate your detailed response. Our department plans to have an on-site group training next year. I have some further questions on a few points.

I would include both days in the analysis, but you need to pay attention to the following:

Between Day 1 and Day 8, make sure the time is not reset to 0. If you believe there is no drug left at the end of each dosing interval, you could reset the time to 0 on Day 8, but that requires a fair amount of data manipulation (which can be done with the interface). Unless numerical issues arise from having to integrate from 0 out to Day 8, just do not reset and keep the real time since the first dose. If you do want to reset, let me know and I will explain in detail how to proceed.

I am not sure I fully understand your point, but here are the time points in the dataset: 0 (dosing), 0.25, 1, 4, 8, 24, 48, 96, 167, 168 (dosing), 169, 176, 192, 216, 288, and 336 h. So drug accumulation will be accounted for whether or not steady state is reached (no steady state was reached in this case).

2) The PK may have between-occasion variability. To test that, you can add between-occasion variability by setting occasion = 1 for Day 1 and occasion = 2 for Day 8. Again, if you do not know how to do it, please let me know and I will explain in detail. It would be best to have the project; if it is private, contact Certara support to ask how to proceed, and they will send it to me if needed.

I was thinking of adding between-occasion variability from the beginning. Would 300 observations be too few to support one more parameter? (I already have six structural parameters: V, V2, Km, Vmax, Q, CL.) The Naïve Pooled method was used because of the small dataset and the many parameters.

Epsilon shrinkage: it is computed as 1 - SD(IWRES), using the IWRES values shown on the Residuals worksheet.

I believe it is only a numerical/precision issue here, and it means there is very little shrinkage. When you have very sparse data (like one observation per patient), you will see high shrinkage because the individual fits tend to be too perfect (one sample often leads to predicted ≈ observed). I would not worry about the small negative value; it means no shrinkage.

In the Residuals worksheet, how is the weight defined? How are IWRES, WRES, and CWRES calculated?

If shrinkage = 1 - standard deviation, is that the standard deviation of IWRES? How exactly is the shrinkage calculated?

Is the negative shrinkage in this case because I used the Naïve Pooled method (no individual predictions; all observations are treated as coming from one individual), so that there is no shrinkage?

CEps = 1 is a standard deviation, but it plays the role of a CV because of the way the error is defined.

error(CEps = 1) means that you have an error model in which the observations are randomly sampled as follows: nature draws a random value from a normal distribution with mean 0 and standard deviation CEps (the 1 being the initial estimate of that standard deviation).

In the second statement (observe), CEps stands for the value of that random draw (not the standard deviation).

Now you compute CObs = C * (1 + CEps); this gives you a random sample from your observation distribution, which appears as the first observation in your data set. The same is done for each observation and each subject, and that is how the overall random sample (the data set) arises.

NONMEM has $OMEGA to specify the initial variance estimates (the square root of OMEGA approximates the CV% if a proportional model is used), and one has to give an initial value of OMEGA for each ETA. NONMEM also has $OMEGA BLOCK.

In Phoenix, can the variance-covariance matrix on the Random Effects tab be used to set both $OMEGA and $OMEGA BLOCK? If the default is 1 (like CEps = 1), does that mean the CV% for inter-individual variability is 100%?

In NONMEM, $SIGMA is the variance of the residual error, and EPS is used as the error term in Y = F * (1 + EPS).

In Phoenix, does error(CEps = 1) mean the errors are normally distributed with mean 0 and standard deviation CEps? What does the value 1 mean?

observe(CObs = C * (1 + CEps)). Here, is CEps not the same quantity as the one in the error statement?
