I run Phoenix on a 4-year-old Lenovo ThinkCentre with a Core i5-2400 (quad-core, 3.1 GHz) and 16GB of RAM. I find (anecdotally, I admit) that it runs faster with MPI enabled for most work. However, I also find that the little window in the lower right-hand corner turns yellow at about 1GB, amber at 1.25GB, and red at 1.5GB. I’ve learned from bitter experience that operating in the yellow zone for any length of time runs the risk of suddenly seeing “Phoenix has stopped working. Click here to end application” in a pop-up window. That seems odd for a machine with so much memory to spare and little else running beyond Windows 7 and Outlook (it’s for my exclusive use); it makes me wonder whether Phoenix runs as a 32-bit process, which would cap it at roughly 2GB of address space no matter how much RAM I install. Considering that even a simple workflow quickly takes up 0.4GB, and that I’ll often have 3-4 open at a time, I need to be careful.
My work right now is less about large pop-PK models and more about cranking through “multiple small replicates” (same structural model, different compound/cell-line combinations). While the “Sort” button lets me run the same model repeatedly, sorted by some covariate, the different values do not appear to be distributed over the cores; they are handled sequentially. I’d really like to speed things up, and may have the opportunity to upgrade my machine soon, but since single-core clock speeds have barely moved in recent years, more memory and more cores/threads appear to be the only way to go (a Xeon seems like the obvious step for hyperthreading, possibly dual-CPU or 8-core). But if Phoenix is limited to 1.5GB, I may already have reached the limit of what more memory can buy me, and since “Sort” doesn’t distribute over threads/CPUs, I may be stonewalled there too. I won’t upgrade unless I can reasonably expect performance improvements.
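These replicate runs are embarrassingly parallel in principle, since each fit is independent of the others. If Phoenix (or the NLME engine) exposed any kind of command-line runner, I could imagine scripting the dispatch myself, one fit per core, along the lines of the sketch below. To be clear, the “phoenix_cli” executable and its arguments are placeholders of my own invention; I don’t know whether such a runner actually exists, which is partly what I’m asking.

# Hypothetical sketch: run independent replicate fits in parallel,
# one per core, instead of letting "Sort" handle them one after
# another. "phoenix_cli" and its arguments are placeholders; I have
# no idea whether Phoenix ships such a runner.
import subprocess
from concurrent.futures import ThreadPoolExecutor

replicates = ["cmpdA_lineX", "cmpdA_lineY", "cmpdB_lineX", "cmpdB_lineY"]

def fit_one(rep):
    # Each fit runs as its own OS process, so each gets its own
    # address space rather than sharing one ~1.5GB ceiling.
    return subprocess.run(
        ["phoenix_cli", "--model", "base.mdl",
         "--data", f"{rep}.csv", "--out", f"results_{rep}"],
        capture_output=True, text=True,
    )

# Four workers to match my quad-core i5; threads suffice here
# because the heavy lifting happens in the child processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    for rep, result in zip(replicates, pool.map(fit_one, replicates)):
        print(rep, "exit code:", result.returncode)

A side benefit, if something like this were possible, is that the per-process memory limit would then apply to each replicate individually rather than to the whole batch.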
Can anyone share any insight into Phoenix’s limitations in these regards, and perhaps suggest the optimal path forward?
Many thanks,
-Frank