Different %CV between Phoenix and Classic WinNonlin PK models

Dear Phoenix Users,

I have used two different approaches to characterize the PK data obtained after SC administration of a drug in mice:

  1. 2-compartment Phoenix model

  2. 2-compartment classic WinNonlin model.

Both models provide a reasonable fit to the data. However, the parameter estimates from the Phoenix model have very high %CV.

I would appreciate it if someone could explain the reason behind this.

Thanks and regards,

Chetan

balbc pk.phxproj (573 KB)

Dear Chetan

The old WinNonlin engine is not maximum likelihood, as far as I know. The fit is not as good as the one obtained with the Phoenix engine, which uses a maximum likelihood optimization procedure. I strongly suggest you stop using the WinNonlin engine, which we no longer support. By the way, it is the WinNonlin CVs that are huge, not the Phoenix ones. You need to look at the % error, not the absolute error.

best

Serge

Dear Serge,

Thanks for your reply. It was the WinNonlin model which gave the high %CV. Thanks for correcting me. It's interesting to know that minimization algorithms can have such a huge impact on the precision of parameter estimates.

Best,

Chetan

Normally the CV% are similar between these two tools, as well as the fits. To compare the two tools, be sure that you are using individual modeling with the Phoenix Model object, with the Naïve Pooled engine. We would need to be able to look at your project to comment further on the differences. If you don’t want to post your data on the forum, your project could be sent to Support (Pharsight.Support@Certara.com).

Regards,

Linda Hughes

Hi Linda,

I already attached my Phoenix project file, which includes the dataset, to my first email.

Thanks. Sorry I missed that. It is not a lot of data (9 points) to fit 5 parameters. I will take a look at it.

I’ve tried some different options but haven’t been able to get a stable fit with the WNL Classic engine. I think the fitting process is struggling with the small number of data points and the difference in the scale of the data. The Phoenix Model object was more robust in this case. I will ask someone else about it, but many people are on vacation this week.

Chetan

In the file you attached, you used a Cl parameterization for WNL classic and a micro constant parameterization for the Phoenix model. In addition, you used a dose of 200,000 for classic WNL and a dose of 4,000 for the Phoenix model. If you use the same parameterization and dose for both models you will likely get similar results.
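For reference, here is a minimal sketch (not taken from the attached project; the parameter names and example values are hypothetical) of how a clearance parameterization of a two-compartment model maps onto the micro constants:

def clearance_to_micro(CL, V, CL2, V2):
    # Ke: elimination rate constant out of the central compartment
    # K12 / K21: transfer rate constants between central and peripheral compartments
    Ke = CL / V
    K12 = CL2 / V
    K21 = CL2 / V2
    return Ke, K12, K21

# Hypothetical values for illustration only; the same dose and units
# must also be used in both tools for the estimates to agree.
print(clearance_to_micro(CL=0.5, V=6.0, CL2=0.3, V2=4.0))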

Dan Weiner

Hi Chetan,

We’ve had some discussion on your project here, and I wanted to follow up and give you the attached project. Dan parameterized the model in terms of clearance for both the Phoenix Model and WNL Classic, and got a nice match in the parameter values, and also pointed out that WNL Classic is more stable with the clearance parameterization.

Bob also reminded me that REML adjusts for degrees of freedom, so the standard errors and CV% do not match between REML (WNL Classic) and ML (Phoenix Model) unless you apply the appropriate multiplicative ratio to adjust for the difference in methods. From Bob:

There are 9 observations and 5 fitted parameters (other than sigma), so there are 9 - 5 = 4 degrees of freedom. If you adjust for degrees of freedom (which REML does), you get an estimate of the standard deviation of EPS that is sqrt(9/4) = 3/2 = 150% of the maximum likelihood estimate of the standard deviation of EPS. Indeed, if you look at the parameters listed in the Core Output file for the Phoenix Model with the Ke parameterization, it gives both the ML estimate (EPS_STD_DEV = 0.143) and the REML estimate (EPS_STD_DEV_WNL = 0.215), i.e., it shows this 1.5 ratio. Note that the Cl parameterization gives identical estimates for V and EPS, and an identical standard error of V, as the Ke parameterization.

Core Output excerpt (Ke parameterization):

0.32890933E+01  1  1  THETA            BOUNDS: -0.100+101   0.100+101
0.79139570E-02  2  1  THETA            BOUNDS: -0.100+101   0.100+101
0.32045903E+01  3  1  THETA            BOUNDS: -0.100+101   0.100+101
0.82420337E-01  4  1  THETA            BOUNDS: -0.100+101   0.100+101
0.81057477E-01  5  1  THETA            BOUNDS: -0.100+101   0.100+101
0.14374254E+00  1  1  EPS_STD_DEV      BOUNDS:  0.000E+00   0.100+101
0.20661917E-01  1  1  EPS_VARIANCE
0.67361710E+02  1  1  -LOGLIKE
0.11818253E+03  1  1  ELSOBJ
0.21561380E+00  1  1  EPS_STD_DEV_WNL
0.46489312E-01  1  1  EPS_VARIANCE_WNL

For the Ke-parameterized model, the CV of V is 14.1%, and for the Cl parameterization it is also 14.1% (using the maximum likelihood EPS). The REML value will be 50% higher. In fact, for both the Ke and Cl parameterizations, identical V = 0.00791 estimates are obtained, and identical standard errors of V of 0.00112 are obtained, as theory predicts for the single-subject case. Also as theory predicts, the -2LL values are identical at 134.72 for the two parameterizations (changing the parameterization only matters in the population case).
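As a quick check of that degrees-of-freedom arithmetic, here is a minimal sketch (hypothetical helper code, not part of Phoenix) reproducing the REML/ML ratio from the Core Output values above:

import math

n, p = 9, 5                     # observations and fitted structural parameters
ratio = math.sqrt(n / (n - p))  # sqrt(9/4) = 1.5

eps_std_ml = 0.14374254         # EPS_STD_DEV (maximum likelihood) from the Core Output
eps_std_reml = eps_std_ml * ratio

print(ratio)         # 1.5
print(eps_std_reml)  # ~0.2156, matching EPS_STD_DEV_WNL

# Standard errors and CV% scale by the same factor, which is why the
# WNL Classic (REML) CV% is about 1.5 times the Phoenix (ML) CV% here.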

ClearanceParameters.phxproj (1.1 MB)