Hong Kong PolyU’s top AI expert Yang Hongxia eyes ‘last mile of generative AI’


Chinese artificial intelligence scientist Yang Hongxia, a professor at Hong Kong Polytechnic University (PolyU), is seeking to democratise large language models (LLMs) by empowering hospitals and various enterprises to train their own AI systems.

Yang, who previously worked on AI models at ByteDance and Alibaba Group Holding’s Damo Academy, said in a recent interview with the South China Morning Post that her newly formed start-up, InfiX.ai, envisioned a world in which various businesses could train their own “domain-specific” LLMs, which would complement commercially available AI models from Big Tech firms and start-up developers. Alibaba owns the Post.

According to InfiX.ai’s landing pages on the developer platforms GitHub and Hugging Face, the start-up’s research would “eventually lead to decentralised generative AI – a future where everyone can access, contribute to and benefit from AI equally”.

“Over the next five years, I expect consumers as well as enterprises, particularly small and medium-sized enterprises, to have their own domain-specific models,” said Yang, who serves as associate dean of the university’s Faculty of Computing and Mathematical Sciences and executive director of the PolyU Academy for AI.

She said InfiX.ai, which had a US$250 million valuation after its initial funding round, had a mission to build “the last mile of generative AI”, making AI applications accessible to everyone.

That echoed the vision of Thinking Machines Lab, a start-up founded by former OpenAI chief technology officer Mira Murati. This AI research and product unicorn – reportedly in talks for a new funding round that would value the firm at about US$50 billion – said it was focused on “building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals”.

Among its various endeavours, InfiX.ai developed methods to create highly capable AI systems that required minimal computational resources, “making advanced AI accessible to organisations of all sizes through techniques like FP8 precision training, edge AI deployment and privacy-preserving solutions”, according to the company.

Yang Hongxia serves as the associate dean at the Faculty of Computing and Mathematical Sciences of Hong Kong Polytechnic University. Photo: Handout.

The similar goals of Yang and Murati reflect a broader push in the AI industry to widen adoption of the technology and expand the scope of innovation in the most cost-effective way for enterprises.

While a number of Big Tech firms and AI unicorns – start-ups valued at more than US$1 billion – have made generative AI breakthroughs, Yang said InfiX.ai aimed to enable various institutions, which hold private troves of data from their industries, to develop their own domain-specific models “with the minimum of computing resources”.

She said open-source models, such as those from DeepSeek, were trained without an industry’s specific domain data and therefore could only be deployed for “inference”, where they were prone to widespread hallucinations – incorrect or misleading results.

Yang said existing foundational LLMs had made technical breakthroughs in maths problem solving, code generation and various general tasks, but they lacked the training to solve highly specific problems, such as cancer treatment in healthcare. The pre-training of these models is often based on general data from the internet, without any industry-specific context.

InfiX.ai provided continuous “pre-training” for LLMs by incorporating specific industry knowledge and enterprise data, according to Yang.

A published author of many papers on LLMs, Yang said individuals and businesses would eventually have access to their own models, a shift that would parallel the steady proliferation of personal computers and smartphones. Meanwhile, the centralised development of foundational LLM technologies would evolve in a way akin to how supercomputers are deployed in national laboratories.

In the paper InfiMed-ORBIT: Aligning LLMs on Open-Ended Complex Tasks via Rubric-Based Incremental Training, Yang and her co-authors wrote that reinforcement learning in LLMs often failed in open-ended domains like medical consultation.

The development of generative AI, according to Yang, had entered the third stage of application, in which Chinese AI players could pursue further innovations. “China’s production performs better because we have a lot of consumers ... and that’s the truth,” Yang said.

In the first half of 2025, China saw a massive uptick in generative AI adoption to 515 million users, most of whom preferred domestic AI models, according to a report released last month by the China Internet Network Information Centre. - SOUTH CHINA MORNING POST

 
