LETTER: BIG TECH BREAKS THE LAW — THAT'S USUALLY HOW IT STARTS

CAN BAŞKENT

Kate Mosse (“AI’s assault on our intellectual property must be stopped”, Opinion, FT Weekend, December 21) is right. Large language models (LLMs), such as ChatGPT, are in practice machine learning systems that require enormous amounts of data and processing power. Advances in hardware delivered the processing power in the form of GPU (graphics processing unit) chips; one only has to look at the recent rise in the share price of a company like Nvidia.

Data, however, is a different story. Training LLMs requires vast quantities of it, and in most cases that data is owned by someone: the newspapers, research papers, novels, poetry and photography we see all around us. Mosse is correct that Big Tech is stealing this data, because paying for it would make the advances in LLMs practically impossible. LLMs wouldn’t be popular; they wouldn’t be part of the common lexicon; and they wouldn’t be in everyone’s phone.

It is a familiar story for those of us who watch the tech sector critically. The infamous gangster Al Capone, when he was a child, used to pray every day for a bicycle. Then he realised that God does not work that way. So instead he stole one, and started praying for God to forgive him.

Think of every life-changing invention Big Tech has produced in recent decades, from Uber to Facebook. The pattern is the same: first they break the law (just remember the problems Uber created), and only then do they start working to fix the problem.

It is upsetting to realise that, as a society, we have learnt nothing from these past mistakes and keep repeating them with LLMs.