Exploring the frontiers of nuclear and battery technology in 2026


From energy breakthroughs to making sense of AI, the year ahead looks big for tech.

With 2026 in full swing, there’s a lot going on in the tech world, from the ever-present hype over artificial intelligence (AI) to potential new mechanisms to generate and store energy.

Here are a few real, practical developments to look forward to.

Going nuclear

One of these is the prospect that widespread nuclear energy may be just on the horizon – with more reactors built, at lower costs, than ever before.

Fundamentally, nuclear plants function like most other forms of thermal energy production, that is, using heat from nuclear reactions to boil water, producing steam that spins turbines connected to generators to produce electricity.

While previously nuclear plants were expensive and slow to come online – and also had to be custom-designed for specific locales – new developments in the field look towards standardised designs in the form of small modular reactors (SMRs).

SMRs are designed to be smaller and easier to manufacture, allowing for faster production at lower costs. Their scaled-down size would also provide flexibility in deployment.

An SMR could, for instance, power a specific facility instead of an entire city. Some designs even call for coolants such as molten salt or liquid metal – a stark difference from standard plants, which use water that must be kept under extreme pressure to stop it from boiling away.

Should a leak occur, a loss of pressure in a water-cooled system can rapidly reduce cooling effectiveness, increasing the risk of overheating and, in worst-case scenarios, a reactor meltdown. This is less of a concern with liquid metal and molten salt, which do not require containment in a high-pressure environment.

One trade-off is that this smaller scale translates to lower total energy production: a traditional nuclear plant produces upwards of 1,000MWe (megawatt electrical), while an SMR outputs up to 300MWe, though some designs can reach 600MWe, according to the World Nuclear Association.

The Nuclear Energy Institute (NEI), an American trade association, further states that a single traditional nuclear plant is capable of powering about 760,000 homes. For context, the average American home uses 10,500kWh of electricity each year, based on 2023 data from the US Energy Information Administration.
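
Those two figures can be sanity-checked with some quick arithmetic. A minimal sketch in Python, assuming a 90% capacity factor – an assumption not stated in the article, but typical for nuclear plants:

```python
# Rough cross-check: how many average homes can a ~1,000MWe plant supply?
plant_mwe = 1_000          # traditional plant output, in MWe
capacity_factor = 0.9      # assumed fraction of the year at full power
hours_per_year = 8_760
kwh_per_home = 10_500      # the article's 2023 US average

annual_kwh = plant_mwe * 1_000 * capacity_factor * hours_per_year
homes = annual_kwh / kwh_per_home
print(f"{homes:,.0f} homes")  # → roughly 750,000, close to NEI's 760,000
```

The result lands within a few percent of the NEI figure, suggesting that figure rests on similar assumptions.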

Malaysia could also soon be primed to reap the benefits of new SMR developments in the energy space.

Bernama reported last August that the Energy Transition and Water Transformation Ministry is conducting a feasibility study on nuclear energy, including SMRs, to assess their potential.

Deputy Prime Minister Datuk Seri Fadillah Yusof, who is also the Energy Transition and Water Transformation Minister, was quoted saying in the report that the study would evaluate waste management strategies to ensure nuclear energy is integrated responsibly into the country’s energy ecosystem.

Shifting to new battery tech

From energy generation to energy storage: 2026 may see the beginning of a mass transition from lithium-ion to sodium-ion batteries in the tech space.

This is already starting to be seen in the electric vehicle (EV) space, with sodium-ion battery-equipped scooters on sale in China, each priced between US$400 and US$660 (RM1,622 to RM2,676), according to a BBC report from last June.

Impressively, these scooters can be charged from 0% to 80% in just 15 minutes, highlighting another perk of sodium-ion batteries: the potential to charge faster than standard lithium-ion ones.
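
That quoted charging time implies a rough average charge rate, sketched below; the C-rate framing is our back-of-envelope illustration, not a figure from the BBC report:

```python
# C-rate = fraction of total battery capacity delivered per hour.
# "0% to 80% in 15 minutes" therefore implies:
fraction_charged = 0.80
hours = 15 / 60            # 15 minutes as a fraction of an hour
c_rate = fraction_charged / hours
print(f"average charge rate ≈ {c_rate:.1f}C")  # → average charge rate ≈ 3.2C
```

For comparison, many standard lithium-ion packs are charged well below that rate to limit heat and wear, which is what makes the claim notable.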

The BBC report also points to safety as another factor in the push towards adopting the new battery technology, with industry insiders indicating that sodium-ion batteries may be less likely to overheat and combust. The issue had been a major public concern following a series of reports on battery fires in parked EVs across the globe.

Sodium-ion batteries also lose less capacity in sub-zero temperatures than their lithium-ion counterparts, making them a good fit for deployment in harsh conditions.

Leading the charge this year is Chinese battery maker Contemporary Amperex Technology Co Ltd (CATL), with its new Naxtra sodium-ion battery platform.

According to several online electric vehicle publications, CATL is planning a widespread deployment of its sodium-ion batteries.

This would come in the form of battery swap systems – where vehicles such as scooters can exchange a depleted battery for a fully charged one on the fly – as well as use in passenger and commercial vehicles and in energy storage, an appealing prospect amid a spike in lithium prices in recent months.

According to a 2024 report from the American Physical Society (APS), a non-profit membership organisation of physicists, sodium is about a thousand times more abundant than lithium, making it a highly attractive element for battery manufacturing and a potential way to lower costs as production ramps up.

Of course, this does not come without downsides, a major one being significantly lower energy density: sodium-ion batteries store less energy than lithium-ion ones of the same size and weight. The gap, however, has reportedly been shrinking year by year.
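
To illustrate what lower energy density means in practice, here is a rough sketch using assumed, illustrative Wh/kg figures – these are not specifications from the article or from CATL:

```python
# Pack mass needed to store the same energy at different (assumed) densities.
densities = {"lithium-ion": 200, "sodium-ion": 160}  # Wh/kg, illustrative only
pack_kwh = 50  # a hypothetical mid-size EV pack

for chem, wh_per_kg in densities.items():
    mass_kg = pack_kwh * 1_000 / wh_per_kg
    print(f"{chem}: {mass_kg:.1f} kg")
# → lithium-ion: 250.0 kg
# → sodium-ion: 312.5 kg
```

The same stored energy costs extra mass with the lower-density chemistry – a penalty that matters less for scooters, grid storage and battery swap stations than for long-range cars.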

Tumultuous AI

On the other hand, there are the waves of new developments in the AI space, a field that has spent the last few years under fire for the high energy and water usage of its data centres, particularly for cooling.

While AI has shown promise in making coding more accessible than ever through the wider availability of more general chatbot-based tools like ChatGPT, Copilot and Claude, along with more specialised ones like Replit, Cursor and Lovable, it’s not all sunshine and rainbows.

Companies and individuals alike have embraced the technology, with catchy buzzwords like “vibe coding” becoming commonplace alongside lofty claims by tech companies saying a significant portion of their code is now written by AI – supposedly up to 30% at Microsoft and over a quarter at Google, according to reports from CNBC, Business Insider and TechCrunch.

On the flip side, a July 2025 report from The Washington Post points to a potential decline in entry-level jobs due to the proliferation of AI, which could have the knock-on effect of hindering the development of mid-level workers over time.

This would, in turn, force both employers and educators to reconsider talent pipelines and critical skillsets for future workforces.

In a StarLifestyle report from March 2025, Daren Tan – founder of local IT community Developer Kaki – put it more succinctly: AI is impressive at generating boilerplate code and solving standard problems, but has blind spots, producing subtle bugs, logic errors or even security vulnerabilities.

This is when a human touch is required to iron out kinks, such as bugs and security shortcomings. Not to mention dealing with the “technical debt”, essentially the difficulty in maintaining and adapting AI-generated code over time.

As highlighted by online technology publication The Register, AI-generated code is also 1.7 times more likely to contain defects than code written by humans. Such mistakes also tend to be more severe than human-made ones.

Trend Micro’s head of threat awareness Dustin Childs even wrote in a security update review in December 2025 that he expects a continued rise in common vulnerabilities and exposures in Microsoft services due to the prevalence of AI bugs.

On a positive note

However, things aren’t all bad on the AI side of the tech world, as researchers take steps towards a better understanding of what’s going on under the hood of large language models (LLMs) – inner workings that even AI builders don’t fully comprehend.

For instance, AI startup Anthropic published a paper in March 2025 on how it is building a figurative “AI microscope” to give insight into how an LLM solves problems and “chooses” words in its responses – and even to explain some cases of hallucination.

This is known as mechanistic interpretability and involves tracing the “thought process” of an LLM when given a prompt, going through each reasoning step to gain an understanding of how a response is formed.

The example outlined by Anthropic in its report involves its chatbot Claude encountering the prompt “Which sport does Michael Jordan play?”. Since Michael Jordan is an entity it knows about, a “known answer” feature is triggered, yielding the accurate answer of “basketball”.

When an unknown name is input, on the other hand, Claude defaults to responding that it doesn’t know, since it does not recognise the entity.

Anthropic goes on to say that a hallucination happens when the LLM falsely recognises an entity, suppressing the default “don’t know” feature and causing it to generate a plausible but wrong answer.
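
The gating behaviour Anthropic describes can be pictured with a toy analogy – this sketch is our simplification, not Anthropic’s actual mechanism, and the names and answers in it are made up for illustration:

```python
# Toy analogy of the "known answer" vs default "don't know" gating.
KNOWN = {"Michael Jordan": "basketball"}  # entities the model "recognises"

def answer(name, falsely_recognised=False):
    if name in KNOWN:
        return KNOWN[name]         # "known answer" feature fires
    if falsely_recognised:
        # Hallucination: the default is wrongly suppressed, so a
        # plausible-sounding guess is produced instead.
        return "tennis (plausible but wrong guess)"
    return "I don't know"          # default for unrecognised entities

print(answer("Michael Jordan"))            # → basketball
print(answer("Jordan Michaelson"))         # → I don't know
print(answer("Jordan Michaelson", True))   # → tennis (plausible but wrong guess)
```

In the real model these are learned internal features rather than a lookup table, but the failure mode is the same: hallucination occurs when the “recognised” path fires for something the model does not actually know.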

The “AI microscope” was also used to provide insight into how “jailbreaking” – the practice of circumventing built-in LLM guardrails through prompting – actually works.

This insight can then be used to better understand what these AI models are capable of and make sure they function as intended.
