80 Billion Neurons
🤖 We're gonna do this, even if it kills us.
PLUS: No more voice notes, and right-wing training data...

What’s up, team!
We’re cooking with this newsletter! Here’s what’s going on in the new world today:
Google’s decided to streamline AI ops and get serious 💢
Investigating the training data behind LLMs ✏️
Transcribing voice notes 🎤
🤖 Alphabet merges DeepMind and Google Brain AI research units
The move combines DeepMind, a Google acquisition, with the in-house Google Brain research team in a bid to compete with OpenAI et al. (Source)
🤖 White nationalist material found in training data
The C4 dataset, a collection of 15m websites used to train large language models, has been found to contain content from RT, VDARE & Breitbart. This data was then used in models like Bard & LLaMA (Meta). It comes as other valuable sources like Stack Overflow have decided to charge for access to their data. (Source)
🤖 Woman's bowel cancer spotted by artificial intelligence
Sensational news. A woman who was part of a study using artificial intelligence (AI) to detect bowel cancer is free of the disease after it was found and removed. 2,000 patients were recruited for the trial, which uses AI to spot concerning tissue that could be missed by the human eye. (Source)
App of the Day
We’ve all got that one friend who obnoxiously loves the sound of their own voice on WhatsApp and is constantly droning on.
Even if you’re not as cynical as I am, there are times you don’t want to listen to their soft tones.
Transcribe me uses AI to turn voice notes into text. It’s free, doesn’t save any of your personal data, and currently integrates with WhatsApp and Telegram.
Better yet, you don’t even need to download an app, just add Transcribe me to your messages!

Going Deep ⏬
A couple of weeks ago, Anthropic announced a $5bn, four-year plan to take on OpenAI. Why does this matter?
It matters for a couple reasons.
Anthropic was founded by a team of ex-OpenAI employees with the goal of better understanding the impact of LLMs on society, and putting in safeguards to prevent abuse.
They started the company out of a belief that AI was not ready for productisation. But just two years later, they’ve pushed out Claude.
It’s a similar story at Google, which fell behind on releasing its customer-facing LLM, Bard, because it feared the technology was poorly understood and lacked the necessary safeguards.
These software companies are spending billions on AI advancement.
All AI companies are converging on the same point.
It’s economically unsustainable to just be a research lab. It’s going to cost $5bn to get to the next version of Claude. Without monetisation at this point, it’s a difficult funding conversation.
The next two years will define the next 20.
Companies are vying to be at the cutting edge of R&D. Where they get to in the next two years defines where they’ll play for the next 20. Within the next couple of generations of Claude, Bard and GPT, these models will be able to advise on business strategy, perform actions, and return outputs unmatched by the rest of the industry.
Even if the safeguards are not ready, it’s a risk these labs are taking. Even labs that were founded with the sole purpose of charting a slow, steady course.