• 0 Posts
  • 4 Comments
Joined 11 months ago
Cake day: October 27th, 2023

  • There’s no real growth opportunity for you at this job. No way for you to learn / level up anything but your confidence and independence.

    It’s unusual, but not that unusual on the lower pay end. Normally a company would hire someone a little more senior to do the unsupervised-team-of-one thing. But they went cheap.

    It sounds like they’ve made a bet that this product is worth $X, and doing it faster, better, or more robustly won’t impact that.

    It’s probably better for you to finish the product and leave, but only for the story and the closure. You should be searching for your next job now. Find something where you’ll be part of a team - you’ll learn a lot more.




  • When I did my PhD, starting around 20 years ago, things were different. Feature engineering within a data domain was a pretty common way to specialize. Neural networks were old-fashioned function approximators that went out with shoulder pads. The future was structured Bayes nets, or whatever was going to replace conditional random fields.

    I’d listen to my PIs talk about how things were different - how I was leaning too much on the power of models to just learn things, and I had to focus more on the precise choice of model. When I pointed out that given the right feature engineering, the models basically performed the same, they’d dismiss that, saying I was leaving a lot on the table by not deriving a more fit-to-purpose model.

    These days, I look at the work of the junior modelers I supervise, and I urge them to at least look at the data, because you can’t really understand the problem you’re working on without getting your hands on the data. But they’d rather focus on aggregate performance metrics than examine their outliers. After all, with large datasets, you may not even be able to look at all your outliers to recognize patterns. And how could you do as good a job as one of today’s giant networks?

    Then there are LLMs, where you may be handed something that can already basically solve your problem. That upends even more established ways of working.

    But the fact is, these patterns repeat.

    You’re going to study something that won’t be as relevant in 20 years. That’s the way of all rapidly moving fields. And it’s okay. All code ever written will one day be dust. Even if we don’t like to admit it, the same is true of every publication. In industry or academia, you build things that move us forward, slowly, in fits and starts. That’s the broader game you opt to play in this field.

    ML will change. So will everything else.