Comparison with NONMEM

Hi, has anyone ever compared NONMEM MAP individual parameter estimates with NCA and/or compartmental PK analysis in WinNonlin? I found that the CL values are quite close, but the Vd values are very different. Can anyone comment on this? Thank you!

Mindy, you aren’t really comparing like with like when you compare individual MAP estimates with individual compartmental analyses, and certainly not with NCA (think of the difficulty of capturing Cmax, and therefore volume, with the latter: the NCA terminal volume Vz = Dose/(λz · AUC) leans on the terminal slope rather than on a fitted V1). A more meaningful comparison would be Phoenix’s NLME engine against NONMEM, since both use population approaches, although specific differences will depend in part on the problem, the data, and the approach used.

If you’re not familiar with the differences between the WNL Classic engine and the Phoenix Model Object Naïve Pooled engine: in brief, they will yield very similar results for individual modelling when the fits are good (standard errors are small, or the confidence intervals around the estimates are narrow). They won’t yield precisely the same values, but in informal testing they were within half a percent of each other. The PHX naive pooled results are true maximum likelihood estimates, whereas the WNL Classic results come from an iterated weighted least squares algorithm that usually gets close to the maximum likelihood solution when the fits are good, but may differ significantly for poor fits.
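If it helps to see that last distinction concretely, here is a rough Python sketch of my own (an illustration only, not either engine’s actual implementation) that fits the same simulated one-compartment IV bolus data by iteratively reweighted least squares and by maximum likelihood under a proportional error model; with clean, well-fit data the two should land on nearly the same CL and V:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate well-behaved 1-compartment IV bolus data with 5% proportional error.
dose, cl_true, v_true = 100.0, 5.0, 50.0
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])

def conc(lp, t):                        # log-parameterised to keep CL, V positive
    cl, v = np.exp(lp)
    return dose / v * np.exp(-cl / v * t)

y = conc(np.log([cl_true, v_true]), t) * (1 + 0.05 * rng.standard_normal(t.size))

# (a) Iteratively reweighted least squares, weights 1/yhat^2 (proportional error).
lp = np.log([3.0, 30.0])                # deliberately rough initial estimates
for _ in range(10):
    w = 1.0 / conc(lp, t) ** 2          # recompute weights from the current fit
    lp = minimize(lambda p: np.sum(w * (y - conc(p, t)) ** 2),
                  lp, method="Nelder-Mead").x
print("IRLS CL, V:", np.exp(lp).round(2))

# (b) True maximum likelihood (extended least squares), proportional error.
def nll(p):
    yhat = conc(p[:2], t)
    var = (np.exp(p[2]) * yhat) ** 2    # p[2] = log(sigma)
    return 0.5 * np.sum((y - yhat) ** 2 / var + np.log(var))

ml = minimize(nll, np.r_[np.log([3.0, 30.0]), np.log(0.1)],
              method="Nelder-Mead").x
print("ML   CL, V:", np.exp(ml[:2]).round(2))
```

Rerun it with noisier or sparser data and you should see the two answers start to drift apart, which is the poor-fit caveat above. Simon.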

Hi Simon: Thank you very much for your reply and the helpful discussion; it helped me a lot in understanding the problem. One more question, if I may. I found that the WNL Classic PK analysis is quite sensitive to the initial estimates: with the same data but different initial estimates, I got different results. I believe this is at least partially due to a lack of information (the data are not rich enough). Does this imply that the WNL Classic engine may not be suitable for these particular data? Thank you! Mindy

That could be the case, or it might be that the model is mis-specified. If you have time, it might be interesting to fit the same data in the Phoenix Model object, either simply as a naive pool or, if you have the NLME license, with one of those algorithms.
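To get a feel for the "data not rich enough" point outside of any particular engine, here is a toy Python sketch (made-up numbers, nothing to do with WNL's internals): a biexponential fitted to only four samples from two different starting guesses. With data this sparse the objective surface can be flat or multi-modal, so the two starts may well settle on different parameter sets with near-identical fits:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Only four samples of a 2-exponential (2-compartment-like) profile.
t = np.array([0.5, 2.0, 8.0, 24.0])

def model(p, t):
    a, alpha, b, beta = np.exp(p)       # log-parameterised, keeps all terms positive
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

y = model(np.log([8.0, 1.0, 2.0, 0.1]), t) * (1 + 0.05 * rng.standard_normal(t.size))
sse = lambda p: np.sum((y - model(p, t)) ** 2)

# Fit the same sparse data from two different initial guesses.
for start in ([5.0, 2.0, 1.0, 0.05], [20.0, 0.5, 5.0, 0.2]):
    fit = minimize(sse, np.log(start), method="Nelder-Mead")
    print("start", start, "->", np.exp(fit.x).round(3), "SSE", round(fit.fun, 6))
```

Simon.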

Hi Simon: Thank you very much for your prompt response and suggestions. Following them, I ran both a Naive Pooled analysis and an NLME (FOCE ELS) analysis. The results are interesting: Naive Pooled and NLME give similar results, but all parameter estimates (CL, V1, V2, Q) are higher than what I got from NONMEM. In particular, the Q value is much higher than NONMEM's (80 vs. 25). I am not sure how to interpret this. Have you compared results from the two programs, and what were your observations?

Another dummy question I hope you can answer: my data are from an IV infusion study. How do I input the data to represent an IV infusion in NLME? I tried mapping the infusion rate to "A1 Rate", but that does not seem to do the trick, so I have had to ignore the infusion and treat the dose as an IV bolus. (Please excuse the dummy question; I am new to NLME and cannot find detailed instructions on this in the user guide.) This has been an interesting and fun exercise for me. Thank you very much again. Mindy

Mindy, can you post your PHX project and your NM code/results so we can try to see where these differences might arise? Regarding "A1 Rate": did you have the "infusions possible" box checked on the Structure tab? Again, if we can see your project (or some subset of it), we can better direct you to the settings that may need to change.
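While you dig out the project, a quick way to convince yourself that the bolus workaround matters: simulate a two-compartment model dosed both ways and compare the early concentrations the estimator has to chase. A rough Python sketch with made-up parameter values (nothing to do with your actual data, nor with how NLME solves the system):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-compartment parameters (hypothetical values, not Mindy's data).
cl, v1, q, v2 = 5.0, 10.0, 25.0, 40.0
dose, t_inf = 100.0, 1.0                # same dose: 1 h infusion vs bolus

def rhs(t, a, rate_in):
    c1, c2 = a[0] / v1, a[1] / v2
    return [rate_in(t) - cl * c1 - q * c1 + q * c2,   # central amount
            q * c1 - q * c2]                          # peripheral amount

times = np.linspace(0.05, 6.0, 25)

# Dose coded correctly as a 1 h zero-order infusion.
inf = solve_ivp(rhs, (0, 6), [0.0, 0.0], t_eval=times, max_step=0.05,
                args=(lambda t: dose / t_inf if t <= t_inf else 0.0,))

# The same dose mis-coded as a bolus into the central compartment at t = 0.
bol = solve_ivp(rhs, (0, 6), [dose, 0.0], t_eval=times, max_step=0.05,
                args=(lambda t: 0.0,))

for ti, ci, cb in zip(times[:5], inf.y[0, :5] / v1, bol.y[0, :5] / v1):
    print(f"t = {ti:4.2f} h   C_infusion = {ci:5.2f}   C_bolus = {cb:5.2f}")
```

The mis-coded bolus puts the early samples far above the true infusion profile, and those early points are exactly what drive V1 and Q, so a workaround like that could well contribute to the inflated Q you saw. Simon.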

Hi Simon: Thank you very much for your helpful messages, and sorry for not getting back to you earlier. I figured out that the problem I had with the infusion was due to a mistake in my data file. After the mistake was corrected, the results from NLME were reasonably close to what I got from NONMEM. I plan to look at the results further, but I cannot do so right now; whenever I have time, I will dig into them more and hopefully discuss them further with you! Thank you very much again. I have enjoyed learning from you! Have a great day! Mindy