My name is Saulius Šimčikas. I spent the last year on a career break and now I'm looking for new opportunities. Previously, I worked as an animal advocacy researcher at Rethink Priorities for four years. I also did some earning-to-give as a programmer, did some EA community building, and was a research intern at Animal Charity Evaluators. I love meditation and talking about emotions.
Tell me what you want me to do with my life, especially if you can pay me for it.
What worries me even more is how AI will amplify this. We might soon have personalized AI content designed to be even more addictive. Individual echo chambers crafted by AI to maximize engagement. Right now, AI mostly selects existing content to recommend—but soon, it could create content directly for each user, optimized purely for engagement.
Hi Yaroslav. That’s a touching story. I read your LessWrong post and the first page of your website. I think the reason you’re struggling to get feedback might have to do with how your ideas are presented.
Your LessWrong post starts with high-level reflections and personal experiences, and only near the end briefly describes what your actual product is. But even after reading it, I’m still not sure what the product does. It seems to be some kind of programming tool or language—but how would someone use it? What can it do that other tools can’t? Why would a developer want to use it?
That’s not a criticism of the ideas themselves—it’s just a communication gap, and those are solvable. I’d recommend starting with something like an elevator pitch—just 1–2 sentences that clearly explain what the product is, who it’s for, and why it’s exciting. There are lots of good materials online about writing elevator pitches, and even LLMs can help generate one if you feed them the right structure.
And beyond that, I’d focus on describing concrete use cases. Even if the product isn’t ready for them yet, people need to imagine what they could do with it. Right now, there’s a big gap between the high-level vision (“compete with AGI”) and the technical details (like the AVL-tree example), with very little in between.
Also, I’m not sure LessWrong is the right audience. You might have better luck reaching out to communities interested in new programming languages, formal methods, or open-source developer tooling. ChatGPT suggested places like Hacker News, r/ProgrammingLanguages, and IndieHackers.
Finally, I think the idea of “humans becoming superintelligent” is intriguing but maybe too ambiguous. If you mean “augmenting human cognition through tooling,” that’s a very interesting and valuable direction. But it might help to use more precise language to avoid confusion with the more common definition of superintelligence (i.e., vastly beyond human capability in all domains).
Hope some of this is helpful! You’ve clearly put a lot of thought and work into this, and that kind of persistence is rare. Whatever happens with this particular project, the mindset and skills you’re building will carry forward. Wishing you strength and luck as you take the next steps!
Are you also concerned about other interventions outside vegan advocacy that push for replacing animal-based foods with plant-based ones?
Yes, the same argument applies to other types of reduction of animal products, especially beef. Chickens tend to use much less cropland per calorie, reformed or not. I'm not so much concerned as resigned about figuring out whether decreasing meat consumption is good or bad. It's almost surely good for farmed animals; I'd give maybe 55% that it's bad for wild animals. But then there is also the impact on the environment (like global warming), which could also be a factor for x-risks and such. But I'm not even that sure that some x-risks are bad from a utilitarian POV. Vegan advocacy might also increase moral circle expansion. But even that could be bad. For example, if people care more about animals, maybe they will care more about preserving natural habitats, which might contain a lot of suffering. There are so many factors that go in all kinds of directions. We're clueless.
For me, chicken welfare reforms look like an unusually good bet in this uncertain world. They help big farmed animals, reduce the populations of small wild animals, and maybe increase moral circle expansion a bit. All of these seem likely good. They do harm the environment, but it's a relatively small effect, and I think it can be outweighed by donating a little to some environmental charity. So to me, chicken welfare reforms look good from many different worldviews.
The invertebrate charities you mentioned seem very good as well from many perspectives. But we are clueless about their long-term effects too.
It would be nice if the Welfare Footprint Institute (WFI) determined the time in pain and pleasure for the most abundant species of terrestrial nematodes, mites, and springtails, which are the most numerous terrestrial animals.
WFI looks at animals that are farmed in a consistent way, in places where we can easily observe the lives of individuals from beginning to end. This sounds like a very different and much, much more complex project.
And even if we got precise WFI estimates for all species, we still might disagree about whether increasing wild animal populations is good or bad because of disagreements about how to weigh:
I think it’s difficult to improve on the handwavy argument that maybe wild animals suffer more, so we are better off if there are fewer of them. I think that people who care about small invertebrates are probably better off supporting the invertebrate charities you mentioned than funding such a complex research project, which might not end up changing the behaviour of that many people (unless it changes Open Philanthropy's grantmaking).
Btw, I think it’s unlikely that nematodes are sentient because they are so simple. The most commonly studied one has like 300 neurons. But I see they are excluded from your estimate anyway because they are not arthropods.
I try to maximise happiness (in the broadest meaning of the word) and to minimise suffering (again, in the broadest meaning of the word). Goodharting would be to say that by far the best outcome for my values would be to turn everything in the universe into hedonium (a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss). That doesn't sound like a great outcome to me, so yes, it can be goodharted. It shows that my actual values are more complex than just caring about happiness and suffering. But it is usually a good-enough proxy for what I want.
Personally, I assume that it's more likely that arthropods live net negative lives. They are mostly r-selected, so most of them die soon after birth, possibly painfully. So in terms of short-term impact on animal welfare, I see it as a tentative positive that welfare reforms likely decrease wild animal numbers. If I understand it correctly, you see it as a tentative negative. I'd be interested to know why.
On the other hand, I see it as a bad thing that vegan advocacy probably seriously increases wild animal numbers. But I'm unsure about how to weigh this against environmental concerns. And I'm very unsure whether wild animals' lives are net negative overall, but I slightly lean towards yes.
I’m thinking that it might be worthwhile to lobby AI companies to change how their language models discuss their own consciousness.
I upvoted the article; it makes good points. But personally, I will mostly continue treating insects as moderately important. Your article implicitly assumes pure utilitarianism. Utilitarian calculations play an important role in my decision-making, but I don't follow them religiously. And even if I did, there might still be more important things than insect suffering.
For example, I once thought that the conclusion of utilitarianism is that we should try to turn everything in the universe into hedonium (a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss), even if our chances of success are minuscule (I see someone else argued for it here). But then I realised that I'm just not excited about that. So I concluded that I'm not a pure utilitarian. This argument about insects also makes me feel like I'm not a pure utilitarian.
shown by the beautifully scribbled light blue area
The scribble is indeed very beautiful.
In your graph above, it looks like the impact lasts for a lot more than one year. I assume it's something like this:
The red line here is what would've happened without the Stop The Farms campaign, and the blue line shows that it's different for a little while with the campaign. But I assume that the market soon (like within a year) returns to the same growth trajectory, and it's as if we never did anything, except that maybe farms are built in a different country. Chicken production is growing, and I don't think this will change in the relevant timeframe of a few years.
Many in EA focus on preventing a future self-improving superintelligent agent that might pursue some alien goal misaligned with human values. But this podcast made me realise that such an agent already exists—not as a conscious entity, but as an emergent, decentralized system. It’s what Scott Alexander called Moloch: the dynamics of markets, algorithms, status games, and incentive structures that collectively form a kind of self-improving, misaligned intelligence.
Screen time is one of the proxy goals it optimises for—not because anyone chose it, but because attention is monetisable. And now, Moloch is building more powerful AI, which risks accelerating its agenda, including screen time. A generation raised like this could bring us closer to something like Idiocracy—a society overwhelmed by problems, but cognitively unequipped to solve them. Maybe reducing harmful types of screen time isn't just a public health move; maybe it's part of fighting back.