ChatGPT and AI: How will it impact climate tech and the wider world

Michael Douglas - CRO, Onnu

The team here at Onnu has a strong technology background and, like seemingly the rest of the world, we have been blown away by our first impressions of ChatGPT.

Working in the carbon removal business, our work is research-heavy, drawing on many scientific studies and ever-developing facts and trends, coupled with a need to communicate these often complex concepts to an audience that may be very new to the subject matter.

So we thought it would be useful to take a good look and share some thoughts about how we think it can assist with research, the communication of difficult concepts, and where the dangers may lie.

So, what is ChatGPT?

Well, who better to ask than ChatGPT itself…

It is offered, currently for free, by a company called OpenAI, so anyone can go online and ask questions like the one shown above.

I get spoken answers from Alexa or Google, so what’s the difference?

ChatGPT is genuinely creating language and communicating concepts it has pulled from a variety of sources. Alexa and Google are reading out the first search result they find.

And part of what is really amazing is its command of language and choice of style.

Like this mind-blowing example:

It really represents a huge leap forward in AI generating believable human language with nuance and style. In fact, educators are worried that it will be so convincing that students will be able to cheat on their essays and other written work. Interestingly, one of the things people are looking out for to detect it is whether the language is too good. Humans are typically inconsistent with sentence length and are not always great at getting a point across; ChatGPT has no such problems.
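That "too good" signal can even be roughly quantified: human prose tends to vary its sentence lengths much more than machine output does. A minimal sketch of this idea (purely illustrative — the example texts are made up, and low variance alone is not a reliable detector):

```python
import re
import statistics

def sentence_length_variance(text: str) -> float:
    """Return the variance of sentence lengths (in words) in the text."""
    # Naive sentence split on runs of ., ! or ? followed by whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.variance(lengths)

human = ("It rained. We waited for hours under the old bridge, "
         "wondering if anyone would come. Nobody did.")
uniform = ("The model writes well. The output is very even. "
           "Each sentence has equal weight. The style never really varies.")

# Human-style prose shows much higher variance in sentence length.
print(sentence_length_variance(human) > sentence_length_variance(uniform))
```

Real detectors use far more sophisticated statistics, but the intuition is the same: suspiciously uniform, well-balanced prose is one tell.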

Does it tell the truth?

This is the big question. Putting aside the creative applications like the Shakespeare one above, a lot of the perceived power is in answering questions quickly and accurately. But the information it is learning from is the internet, and if there is one thing the internet isn't exactly famous for, it's a diligent attachment to facts.

At its best, it will qualify an answer when it’s not completely clear. For example:

But other times you notice that it is really sure of itself, which, when applied to a business context, has more than a whiff of the over-zealous salesperson turning grey areas into convenient facts.

One obvious way this knowledge shortfall shows itself is that the current version seems to have graduated in September 2021 and has no knowledge of the world since then, so this answer is wrong on almost all counts.

It's about 18 months ahead of schedule for Johnson's resignation and, like many others, has no memory of Liz Truss.

Now, in its beta version this is forgivable, and it is reasonable to expect that, as the technology progresses, its knowledge will become more and more current. Nonetheless, presenting this patently wrong answer as fact should be a warning against relying on it too unquestioningly.

What could be the impacts on the climate tech sector?


Starting with the positives, the quality of the language it generates is remarkable, and given that one of the challenges we face is communicating often quite scientific and complex concepts in layman's terms, only the most professional of writers and marketers won't appreciate a bit of help when they are struggling with a paragraph.

Outsourcing the entirety of content writing, however, although tempting, would be pretty irresponsible. In an industry establishing itself with an ever-widening audience and information crucial to planetary survival, integrity is key, and picking up web traffic at the expense of a proper communications strategy and fact-checking is quite a serious ethical misstep.


We are all very used to trawling through multiple search results, delving into scientific papers and other articles and hot takes to find the information we are looking for. It is very tempting to delegate that to our new friend, but as all students know, a key aspect of research is to check your sources. A response from ChatGPT offers no such diligence and instead gives you the answers it prefers, sometimes qualified, but sometimes presented as supremely confident fact.

ChatGPT is already proving a popular substitute for internet searches, but discretion is advised.

The Future

What is well understood is that this is just the beginning. This technology will get better, and we are already seeing very powerful AI models applied to images, audio and video. While nefarious applications of deep fakes grab all the headlines, in fact there are some hugely beneficial uses in society, such as improved communication for disabled people and ultimately the possibility of more effective source checking than humans tend to achieve.

In business, the massive stock photo industry is going to come under huge pressure as photo-real bespoke illustrations can be created at will. In fact, the image we used for this blog was made by AI with the instruction "AI saving the planet", and frankly it is a lot more interesting than a stock photo. And the currently frustrating chatbots we deal with as customers could improve to a far more effective level, changing business models and types of employment.

But the big unanswered question is the ethical one. If AI can construct coherent and compelling arguments for unsavoury things, or someone's face or voice can be shown doing or saying something fictional, what mechanisms are there to stop them, equivalent to those that exist in the human world it threatens to replace?
