I agree, optimizing db calls in my experience has been low-hanging fruit. But (in Rails) going through and building your own PORO + ActiveModel objects can give significant speed increases, especially in cases where the ActiveRecord overhead is too high.
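A minimal sketch of the idea, with hypothetical names (`UserRow`, a `users`-style result set): a read-only PORO whose attributes are plain ivars, so instantiation skips ActiveRecord's type casting, dirty tracking, and callback machinery. In a real app you'd typically also `include ActiveModel::Model` for validations and form helpers; that's omitted here to keep the example dependency-free.

```ruby
# Read-only PORO standing in for a full ActiveRecord model.
# Attributes are plain instance variables, so building one is just
# object allocation plus a few ivar writes.
class UserRow
  attr_reader :id, :name, :email

  def initialize(id:, name:, email:)
    @id = id
    @name = name
    @email = email
  end

  # Build instances straight from raw result rows (e.g. the hashes
  # returned by ActiveRecord::Base.connection.select_all), bypassing
  # Model.new and its per-attribute casting entirely.
  def self.from_rows(rows)
    rows.map { |r| new(id: r["id"], name: r["name"], email: r["email"]) }
  end
end

rows = [
  { "id" => 1, "name" => "Ada",   "email" => "ada@example.com" },
  { "id" => 2, "name" => "Grace", "email" => "grace@example.com" },
]
users = UserRow.from_rows(rows)
```

The win comes from doing only the work the call site needs: no attribute hash, no change tracking, no association proxies.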
Object instantiation and database serialization/deserialization seem to be a pain point that gets overlooked more than people realize.
But in this case I’m preaching to the choir on ruby app optimization.
Do you foresee projects like Rails needing to be rewritten in order to be more amenable to JIT compilation and interpretation by truffleruby & ruby jit?
More than once I've run into db serialization performance issues. Postgres reports everything nice and fast, yet requests are slow. It took me a while to figure out that ActiveRecord was busy converting my data into one large query string, and then busy sending that string over the wire. The issue was a field using postgres jsonb, which in some cases I was filling with a large array of text lines. Workarounds are easy, but I never pinned down why it would take multiple hundreds of milliseconds to serialize into SQL.
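A rough illustration of the cost described above, in plain Ruby with hypothetical table/column names (`docs`, `body`): when a large jsonb value is inlined into the statement, the whole payload gets JSON-encoded and escaped into one giant SQL string before anything hits the wire. One workaround sketch is to keep the SQL itself tiny and ship the payload as a bound parameter instead (e.g. via `exec_query` with binds).

```ruby
require "json"

# A large array of text lines, as in the jsonb case above.
lines = Array.new(50_000) { |i| "log line #{i}" }

# Inlining the value: serialize to JSON, then escape it into the
# statement text. The query string grows with the payload, and all of
# this work happens in Ruby before the query is sent.
json = JSON.generate(lines)
inline_sql = "UPDATE docs SET body = '#{json.gsub("'", "''")}' WHERE id = 1"

# Bound-parameter version: the statement stays constant-size and the
# payload travels separately, without being spliced into the SQL.
param_sql = "UPDATE docs SET body = $1 WHERE id = 1"

puts inline_sql.bytesize  # grows with the payload
puts param_sql.bytesize   # stays constant
```

This doesn't explain why the serialization itself took hundreds of milliseconds, but it shows where the time can hide: the string being built is the size of the entire payload, plus escaping passes over it.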