How much memory is Phoenix able to use?

I run Phoenix on a 4-year-old Lenovo ThinkCentre with a Core i5-2400 (quad-core, 3.1 GHz) and 16 GB of RAM. I find (anecdotally, I admit) that it runs faster with MPI enabled for most work. However, I also find that the little window in the lower right-hand side turns yellow at about 1 GB, then amber at 1.25 GB, and red at 1.5 GB. I’ve learned from bitter experience that operating in the yellow zone for any length of time runs the risk of suddenly seeing “Phoenix has stopped working. Click here to end application” in a pop-up window. That seems a little odd for a machine with so much memory left over and little else running beyond Windows 7 and Outlook (it’s for my exclusive use). Considering that even a simple workflow quickly takes up 0.4 GB, and that I’ll often be working on 3-4 at a time, I need to be careful.

My work right now is less about large pop-PK models and more about cranking through “multiple small replicates” (same structural model, different compound/cell-line combinations). While the “Sort” button allows me to run the same model repeatedly, sorted by some covariate, the different sort values do not appear to be distributed over the cores; they are handled sequentially. I’d really like to speed things up, and I may have the opportunity to upgrade my machine soon, but since CPU clock speeds haven’t gone anywhere in recent years, more memory and more cores/hyperthreading appear to be the only way to go (a Xeon seems like an obvious step, possibly dual-CPU or 8-core). But if Phoenix is limited to 1.5 GB, I may already have reached the limit of what I can do with memory, and since “Sort” doesn’t distribute over threads/CPUs, I may be stonewalled there too. I won’t upgrade unless I can reasonably expect performance improvements.

Can anyone share any insight into Phoenix’s limitations in that regard, and perhaps the optimal path forward?

Many thanks,

-Frank

Hi Frank,
I have a VM with 16 GB of RAM that I do my work on, and I can see a big difference between, say, a 4 GB machine and a 16 GB machine.

I have used the sort feature in the past when I simulate, say, 100 replicates of the same trial and then want to analyse the results by replicate (sorting the fits by replicate so that each is fit separately) to understand the range of possible outcomes from my simulation.

Things are indeed sequential rather than parallel. What you can do is run on separate machines (if you have more than one available to you), or use the command-line version of Phoenix models, where you don’t use the interface but instead write your models and generate outputs from the command line. You will also need a scripting language like R or Python to automate this for you.
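As a rough illustration of that kind of automation, here is a minimal Python sketch that launches several command-line fits in parallel, one per dataset. The executable name, arguments, and file layout are only placeholders (I am not reproducing the actual Phoenix NLME command line here), so you would need to substitute the commands documented for your own installation.

```python
# Minimal sketch: run several command-line fits in parallel, one per dataset.
# The executable name and its arguments below are placeholders, NOT the real
# Phoenix NLME command line -- substitute the invocation from your installation.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

DATASETS = sorted(Path("runs").glob("dataset_*.csv"))  # one file per replicate
MAX_PARALLEL = 4  # roughly one fit per physical core

def run_fit(dataset: Path) -> int:
    """Launch one command-line fit and return its exit code."""
    out_dir = dataset.with_suffix("")          # e.g. runs/dataset_01/
    out_dir.mkdir(exist_ok=True)
    cmd = [
        "phx_nlme",                            # placeholder executable name
        "--model", "model.mdl",                # placeholder arguments
        "--data", str(dataset),
        "--output", str(out_dir),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    (out_dir / "log.txt").write_text(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        for dataset, code in zip(DATASETS, pool.map(run_fit, DATASETS)):
            print(f"{dataset.name}: {'ok' if code == 0 else 'failed'}")
```

With something along these lines, each replicate gets its own fitting process, so the sequential behaviour of “Sort” in the interface is no longer the limiting factor, and you can cap the parallelism at the number of physical cores.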

Phoenix can use more than 1.5 GB; I have loaded datasets with millions of rows. But you will not get any performance improvement from more memory when the bottleneck is the fitting of your data, i.e. the NLME engine running to find the pop-PK estimates. You can gain time by using MPI, as you already do, and by parallelizing the fits using R scripts and the command line.

Regards,

Samer


Thanks Samer, that’s helpful.

For the sequential modeling of small, structurally identical datasets that I outlined in my original post, it sounds like I should just live with it; I don’t have easy access to additional machines.

On the matter of that little “memory window” that turns yellow-amber-red once I go above 1 GB, I’m still struggling to understand why that’s an issue on a 16 GB machine with little else going on. You mentioned you’ve gone above 1.5 GB without problems. Is there a hard-coded memory limit in Phoenix? (It would seem a little anachronistic, but I have to ask :wink:)

-Frank