
I have been that cinderblock for my dad many a time.
If the women don’t find you handsome, at least they can find you handy.
Not trying to be rude, but that's a question of how the engine splits work between its CPU and GPU implementations, not an apples-to-apples comparison.
Compare modern games with CPU particle physics to the heyday of GPU PhysX and there's no contest. CPU physics (including CPU PhysX) is more accurate, less buggy, and generally has little performance impact.
I mean, does it work worse? UE4 uses CPU PhysX, Havok and Unigine ship their own CPU solvers, and every other engine I know of uses a custom particle physics implementation. All of them seem far better at it than GPU PhysX ever was.
On GPU I remember PhysX being super buggy, since the GPU calculations ran at very low precision, and that was if you even had an Nvidia card. It made AMD cards borderline unplayable in many games that did extensive particle physics for no reason other than to punish AMD in benchmarks.
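To make the precision point concrete, here's a toy sketch (mine, not anything taken from PhysX): the same naive Euler integrator for one particle under gravity, run in 32-bit and then 64-bit floats. The `drop` function and its parameters are made up purely for illustration; the only point is that identical arithmetic at lower precision visibly drifts over many timesteps.

```python
import numpy as np

def drop(dtype, steps=100_000):
    """One particle under gravity, naive explicit Euler, at a given precision."""
    dt  = dtype(1e-3)
    g   = dtype(-9.81)
    pos = dtype(0.0)
    vel = dtype(10.0)
    for _ in range(steps):
        vel += g * dt   # scalar ops stay in `dtype`, so rounding accumulates
        pos += vel * dt
    return float(pos)

# Same algorithm, only the precision differs; the gap is pure rounding error.
print("float32:", drop(np.float32))
print("float64:", drop(np.float64))
print("drift  :", abs(drop(np.float32) - drop(np.float64)))
```

The float32 run typically ends up a noticeable distance from the float64 one after 100k steps, and that's the flavor of error that shows up as jitter and instability in a real solver.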
Meh. PhysX's CPU implementation has been outstripping the old GPU-accelerated path for a while now, as far as I know.
Nvidia only seems to release a portfolio item as open source once they've milked it to death first.
Bethesda was notorious back in the day for shipping uncompressed textures. Not losslessly compressed textures, just fully uncompressed bitmaps. One of the first mods after every game release simply compressed them (decompressing on the fly at load time) and got massive improvements in load times and memory usage.
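For a sense of why that mod helped, here's a minimal sketch of the compress-on-disk, decompress-on-load pattern. I'm using zlib as a stand-in; the actual mods repacked textures into GPU-native block-compressed formats like DXT, but the trade-off (less disk I/O in exchange for a little decompression work) is the same idea. The sizes here are invented for illustration.

```python
import os
import time
import zlib

# Stand-in for a 2048x2048 RGBA bitmap (16 MiB). Repeating a small random
# block keeps it compressible, the way real texture data tends to be.
raw = os.urandom(256) * (2048 * 2048 * 4 // 256)

packed = zlib.compress(raw, level=6)
print(f"on disk: {len(raw) / 2**20:.1f} MiB raw vs {len(packed) / 2**20:.1f} MiB packed")

# "Load" path: inflate the packed blob back into memory. At these sizes the
# decompression cost is small next to the disk reads it avoids.
t0 = time.perf_counter()
restored = zlib.decompress(packed)
print(f"decompressed {len(restored) / 2**20:.1f} MiB in {time.perf_counter() - t0:.3f}s")
assert restored == raw  # lossless round-trip
```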