
The world is in a race to deploy AI, but a leading voice in technology ethics warns that prioritising speed over safety risks a “trust crisis.”
Suvianna Grecu, Founder of the AI for Change Foundation, argues that without immediate and strong governance, we are on a path to “automating harm at scale.”
Speaking on the integration of AI into critical sectors, Grecu believes that the most pressing ethical danger isn’t the technology itself, but the lack of structure surrounding its rollout.
Powerful systems are increasingly making life-altering decisions about everything from job applications and credit scores to healthcare and criminal justice, often without sufficient testing for bias or consideration of their long-term societal impact.
For many organisations, AI ethics remains a document of lofty principles rather than a daily operational reality. Grecu insists that genuine accountability only begins when someone is made truly responsible for the outcomes. The gap between intention and implementation is where the real risk lies.
Grecu’s foundation champions a shift from abstract ideas to concrete action. This involves embedding ethical considerations directly into development workflows through practical tools like design checklists, mandatory pre-deployment risk assessments, and cross-functional review boards that bring legal, technical, and policy teams together.
According to Grecu, the key is establishing clear ownership at every stage, building transparent and repeatable processes just as you would for any other core business function. This practical approach seeks to advance ethical AI, transforming it from a philosophical debate into a set of manageable, everyday tasks.
Partnering to build AI trust and mitigate risks
When it comes to enforcement, Grecu is clear that the responsibility can’t fall solely on government or industry. “It’s not either-or, it has to be both,” she states, advocating for a collaborative model.
In this partnership, governments must set the legal boundaries and minimum standards, particularly where fundamental human rights are at stake. Regulation provides the essential floor. However, industry possesses the agility and technical talent to innovate beyond mere compliance.
Companies are best positioned to create advanced auditing tools, pioneer new safeguards, and push the boundaries of what responsible technology can achieve.
Leaving governance entirely to regulators risks stifling the very innovation we need, while leaving it to corporations alone invites abuse. “Collaboration is the only sustainable route forward,” Grecu asserts.
Promoting a value-driven future
Looking beyond the immediate challenges, Grecu is concerned about more subtle, long-term risks that are receiving insufficient attention: emotional manipulation by AI systems, and the failure to build technology around human values.
As AI systems become more adept at persuading and influencing human emotion, she cautions that we are unprepared for the implications this has for personal autonomy.
A core tenet of her work is the idea that technology is not neutral. “AI won’t be driven by values, unless we intentionally build them in,” she warns. It’s a common misconception that AI simply reflects the world as it is. In reality, it reflects the data we feed it, the objectives we assign it, and the outcomes we reward.
Without deliberate intervention, AI will invariably optimise for metrics like efficiency, scale, and profit rather than for ideals like justice, dignity, or democracy, and that optimisation will erode societal trust. This is why a conscious and proactive effort is needed to decide what values we want our technology to promote.
For Europe, this presents a critical opportunity. “If we want AI to serve humans (not just markets) we need to protect and embed European values like human rights, transparency, sustainability, inclusion and fairness at every layer: policy, design, and deployment,” Grecu explains.
This isn’t about halting progress. As she concludes, it’s about taking control of the narrative and actively “shaping it before it shapes us.”
Through her foundation’s work – including public workshops and her role as chairperson of day two of the upcoming AI & Big Data Expo Europe – she is building a coalition to guide the evolution of AI and boost trust by keeping humanity at its very centre.
(Photo by Cash Macanaya)
See also: AI obsession is costing us our human skills

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.