I'm pretty skeptical of the hype around LLMs. That doesn't mean I'm entirely skeptical of their utility, though. There are lots of good things to look forward to with these tools. Here are some of my favorites.
Hiring practices for software developers may improve
Most engineers hate leetcode interviews. They don't see it as representative of what they actually need to do for their job. With few exceptions, that is actually correct. I've been in a leetcode interview where the interviewer actually said "Yeah, lots of people struggle with this because no one uses this in real life."
So it is widely understood to be poor practice that does not help you make good hiring decisions. Why does everyone still use it? I'd say because doing what everyone else is doing is safe. If you use a leetcode interview and make a bad hire, no one can blame you because you're using the same process that Google and Facebook use. Those companies are obvious successes and people generally won't question you emulating them.
If you don't use a leetcode interview, you probably created the interview question yourself. Maybe that makes a good hire more likely. If you make a bad hire, though, all the blame falls on you. You're not emulating anyone else's success. You're taking a risk doing your own thing.
Given that incentive, it makes sense that a lot of people in charge of hiring would pick the safe option. However, LLMs are about to make leetcode interviews a lot less safe.
If everyone starts using ChatGPT or Claude on interviews and they can answer the interview questions correctly, it quickly removes any worth from those questions. I'd argue there was little worth to leetcode questions to begin with, but it would be a debate. Some people would argue that there was value. LLMs make that debate moot by invalidating every test.
Without a safe option, everyone is going to have to come up with something new. We'll see a lot more experimentation with interview questions and hopefully the best ones will surface. In a world of LLMs solving leetcode questions easily, software developers may be evaluated for their ability to do the job rather than their ability to pass an interview.
The playing field for science may get leveled
We'd like to think that a field as logical as science would be entirely merit based. That's easy to say for anyone who is a native English speaker such as myself (and likely most of you reading this). English is still the global lingua franca, though, and that means most science is done in English. Theories are written in English. Results of experiments are published in English. Critiques of past studies are in English.
If you're not fluent in English, your ideas are more likely to be misunderstood because you used the wrong words. Or people reviewing your paper may just not understand what you're getting at and assume it isn't worth their time. From The Economist:
The technology can also help level a playing-field that is tilted towards native English speakers, because many of the prestigious journals are in their tongue. LLMs can help those who do not speak the language well to translate and edit their text. Thanks to LLMs, scientists everywhere should be able to disseminate their findings more easily, and be judged by the brilliance of their ideas and ingeniousness of their research, rather than their skill in avoiding dangling modifiers.
By helping brilliant scientists who struggle with English, LLMs may make science more merit based. Better science is better for all of us.
SEO may finally die
Everyone in tech talks about SEO at some point or another. That doesn't mean SEO is good. SEO is about gaming search engine algorithms to make some results rise to the top. SEO is not about helping people who use search engines find what they need faster.
How many times have you searched for something and had to crawl through pages that used a lot of words that said nothing? Or included information that was completely unnecessary for the purpose of your search? How many times have you been forced to go to, *egads*, the second page of Google?
I have an unpublished post describing my skepticism of LLMs being a threat to search. I will never publish it because I've since realized how wrong I was. I have found things in minutes from ChatGPT that used to take me 2-3 hours of crawling through search results and trying different terms. ChatGPT has often given me information I could never even find in search engine results. That's probably because the training data included the 10th page of Google, which I have only resorted to a handful of times in my life.
ChatGPT isn't perfect. There are still some searches where Google or Bing do a better job of getting information. But by lessening the value of SEO, LLMs will likely reduce the incentive for people to create SEO fluff. That can only improve both search engines and LLMs that use the web as training data.
That being said, I have heard some rumblings about creating SEO content for AI. The idea is to create web pages that, when used as training data, will surface information the creator of those pages wants surfaced. I hope those efforts fail. Hard.
Those are three things where I think LLMs will make things better for all of us. What are you looking forward to?