I’m a data engineer who somehow ended up as a software developer. Many of my friends are now working with the OpenAI API to add generative capabilities to their products, but they lack A LOT of context when it comes to how LLMs actually work.
This is why I started writing popular-science-style articles that unpack AI concepts for software developers working on real-world applications. It started kind of slow; honestly, I wrote a bit too “brainy” for them at first, but now I’ve found a voice that resonates with this audience much better, and I want to ramp up my writing cadence.
I would love to hear your thoughts: what concepts should I write about next?
What gets you excited that you also find hard to explain to someone with a different background?
Gonna toot my own horn and plug my research direction: artificial intelligence x complex systems. I’m talking differentiable self-organization (e.g., neural cellular automata), interacting particle systems (e.g., particle Lenia), and other neural dynamical systems where emergent behaviour and self-organization are key characteristics.
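For anyone reading along who hasn’t seen one: the core of a neural cellular automaton is just a tiny per-cell network applied everywhere on a grid, updated stochastically. Here’s a rough numpy sketch of one update step, loosely in the spirit of Mordvintsev et al.’s “Growing Neural Cellular Automata”; all names, shapes, and kernel choices are my own illustrative assumptions, not anyone’s actual code.

```python
import numpy as np

def perceive(state, kernels):
    """Each cell gathers information from its 3x3 neighbourhood.

    state:   (H, W, C) grid of per-cell state vectors
    kernels: list of (3, 3) filters (e.g., identity, Sobel x/y)
    Returns an (H, W, C * len(kernels)) perception tensor.
    """
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    feats = []
    for k in kernels:
        out = np.zeros_like(state)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * padded[dy:dy + H, dx:dx + W]
        feats.append(out)
    return np.concatenate(feats, axis=-1)

def nca_step(state, kernels, W1, b1, W2, rng, fire_rate=0.5):
    """One update: perceive -> tiny per-cell MLP -> masked residual add."""
    p = perceive(state, kernels)              # (H, W, C*K)
    hidden = np.maximum(p @ W1 + b1, 0.0)     # per-cell ReLU layer
    delta = hidden @ W2                       # predicted state change
    # Only a random subset of cells fires each step, which makes the
    # dynamics asynchronous (and, empirically, more robust).
    mask = rng.random(state.shape[:2] + (1,)) < fire_rate
    return state + delta * mask

# Illustrative wiring with made-up sizes:
rng = np.random.default_rng(0)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
kernels = [identity, sobel_x, sobel_x.T]
C, HID = 8, 32
W1 = rng.standard_normal((C * len(kernels), HID)) * 0.1
b1 = np.zeros(HID)
W2 = np.zeros((HID, C))  # zero-init: the untrained CA starts as a no-op
state = rng.standard_normal((32, 32, C)) * 0.1
state = nca_step(state, kernels, W1, b1, W2, rng)
```

All the emergent behaviour comes from iterating this local rule many times and training the little per-cell MLP end-to-end through the rollout.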
Other than Alex Mordvintsev and his co-authors, Sebastian Risi and his co-authors, and I suppose David Ha with his new company, I don’t see much work at this intersection of fields.
I think there’s a lot to unlock here, particularly when the task at hand benefits greatly from a decentralized and/or compute-adaptive approach with robustness requirements. Swarm Learning comes to mind. So does generative modelling with/of complex systems, like decentralized flow (or Schrödinger) matching for modelling interacting particle systems (e.g., fluids, gases, pedestrian traffic).
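To make the flow matching part concrete: the objective itself is tiny. Here’s a hedged numpy sketch of the (conditional) flow matching loss with straight-line interpolation paths; `v` stands in for whatever velocity network you’d actually train (with an autodiff framework, not numpy), and every name here is my own assumption.

```python
import numpy as np

def flow_matching_loss(v, x1, rng):
    """Monte Carlo estimate of the flow matching loss for one batch.

    v:  callable (x_t, t) -> predicted velocities, same shape as x_t
    x1: (N, D) batch of data points (e.g., particle positions)
    """
    N, D = x1.shape
    x0 = rng.standard_normal((N, D))   # samples from the Gaussian prior
    t = rng.random((N, 1))             # one random time per sample
    x_t = (1.0 - t) * x0 + t * x1      # straight-line path from x0 to x1
    target = x1 - x0                   # that path's velocity (constant in t)
    pred = v(x_t, t)
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

# Toy check with a "model" that always predicts zero velocity:
rng = np.random.default_rng(0)
particles = rng.standard_normal((256, 2))  # e.g., 2-D particle positions
print(flow_matching_loss(lambda x, t: np.zeros_like(x), particles, rng))
```

A decentralized variant, the way I mean it above, would roughly amount to each agent estimating this objective over its local neighbourhood of particles instead of a global batch.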
Last year, when one of my friends, a medical data researcher at Harvard, told me that he and his colleagues were doing research on federated learning, I knew the topic was going to be trending for years to come.
I think this is the most important topic in this thread so far.
Why is that? You’ve got me curious