Digital Mental Health Blogs

Blogs
Code of Conscience: Navigating AI Ethics
by Group Member: Bev Brown
Shaping AI that respects human values, avoids harm, and promotes fairness
Artificial Intelligence (AI), often powered by machine learning models, is transforming how we live, work, and interact with each other. From diagnosing diseases to targeting advertising and shaping our social media feeds, AI systems are increasingly embedded in our daily lives. However, as these technologies evolve, so do the ethical questions they raise; questions that matter not only to researchers and developers but to anyone affected by their use.
This blog explores the ethical dimensions of AI, offering insights and considerations for the responsible development of AI models and systems.
Understanding AI Ethics
AI ethics is the study of how artificial intelligence should be designed and used in ways that respect human values, societal norms and expectations. AI ethics is drawn from disciplines such as philosophy, law, computer science, and sociology, and primarily focuses on the following key principles:
- Fairness: the avoidance of discrimination and ensuring equitable outcomes
- Transparency: ensuring that AI decisions are both understandable and explainable
- Accountability: determining who is responsible when AI systems cause harm
- Privacy: fully safeguarding personal data and respecting user consent
- Safety: the prevention of unintended consequences and misuse
Real-World Ethical Challenges
As we move through an ever-growing digital landscape, it is becoming apparent that AI is not necessarily the panacea it is often purported to be in the media, and several significant real-world challenges have come to light in recent years. In one widely reported incident, a world-famous company retired an AI recruiting tool after discovering it was biased against female candidates: the tool had penalised applications that included terms such as membership of a ‘women’s chess club’. This case underscores how historical data can introduce societal biases into modern AI systems.
Another challenge relates to governmental misuse of AI systems. In the Netherlands, for example, an AI system used to detect childcare benefit fraud disproportionately flagged families with dual nationality. The resulting scandal led to financial hardship for thousands of families and a national debate on algorithmic accountability, further highlighting how essential transparency and ethical oversight are for public sector AI applications.
A newer and rising concern is the use of AI-generated deepfake videos and political manipulation tools. These have been used to impersonate public figures for advertising purposes, or to spread misinformation that undermines democracy. Within a research environment this raises important questions about detection technologies and about building ethical boundaries into models from the outset. For the general population, it underlines the importance of media literacy and trust in information sources.
Many people are also uneasy about the growth of AI and its potential applications, and it is not hard to see why. There are real concerns that automation will bring about the loss of jobs and opportunity, that AI will make biased or unfair decisions, and, perhaps most importantly, that personal data might be misused. The collection and use of medical data is a particular concern. In addition, AI often feels out of reach to most people, especially when it is hard to understand how or why decisions are made.
AI is not an exact science, and this can lead to errors. For instance, few people realise that AI uses vast amounts of data to learn and make decisions. If that data is limited, incorrect, or biased in any way, it can lead to what is termed ‘hallucination’: as the name suggests, a fabricated or nonsensical response based on incorrect assumptions. Another key problem is the media presentation of AI; outlets tend to highlight the scariest or most extreme examples. This can have a significant impact on those least familiar with how AI works; indeed, a lack of digital know-how can make it all feel even more intimidating.
AI also has a significant environmental impact, which runs counter to today’s green initiatives. Training large AI models consumes a substantial amount of energy; one widely cited study estimated that training a single large model can emit as much carbon as five cars over their entire lifetimes. This staggering figure has prompted researchers to explore more efficient algorithms and sustainable computing practices, while simultaneously raising public awareness about the environmental cost of the ever-growing digital landscape.
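The "five cars" comparison comes from a widely cited 2019 estimate (Strubell et al.): roughly 626,000 lbs of CO2-equivalent to train one large language model including architecture search, against about 126,000 lbs for an average car's lifetime including fuel. Both figures are estimates, quoted here purely as a back-of-envelope illustration:

```python
# Illustrative figures only, from the widely cited Strubell et al. (2019)
# estimate; real emissions vary enormously by model, hardware and energy mix.
MODEL_TRAINING_LBS_CO2 = 626_000   # one large NLP model, incl. architecture search
CAR_LIFETIME_LBS_CO2 = 126_000     # average US car lifetime, incl. fuel

cars_equivalent = MODEL_TRAINING_LBS_CO2 / CAR_LIFETIME_LBS_CO2
print(f"Roughly {cars_equivalent:.1f} car lifetimes of carbon")
```

The ratio works out to roughly five, which is where the headline comparison originates.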
Why does all this matter?
For researchers, AI ethics is a rigorous field of study that informs technical design, policy development, and interdisciplinary collaboration. It challenges developers to think beyond performance metrics and consider the societal impact of their work.
For the public, ethical AI means understanding how these systems affect our rights, opportunities, and daily experiences. Whether you’re applying for a job, using a healthcare app, or interacting with customer service chatbots, AI decisions can shape outcomes in subtle yet significant ways.
Ethical awareness therefore empowers everyone to ask three critical questions:
- Who has designed this system?
- What data was used to create the model?
- Can I challenge its decision making?
It is clear that ethical considerations have become more important than ever, especially when developing and deploying new AI systems into the public domain.
Building ethical AI is a shared responsibility
Ethical AI isn’t just a technical challenge; it is a collaborative effort and a shared responsibility. Different stakeholders can contribute as follows:
- Researchers should develop transparent, fair, and accountable algorithms and publish findings that inform public debate
- Governments should establish clear regulations and oversight mechanisms while promoting ethical standards in procurement and deployment
- Companies should adopt responsible AI practices and engage with diverse communities to understand real-world impacts
- Educators should teach AI literacy and ethics across all disciplines
- The public should stay informed, ask questions, and advocate for responsible technology
Conclusion: Shaping a Just and Inclusive AI Future
While AI offers immense promise, it is not without risks. Without ethical governance, it has the potential to amplify inequalities, erode privacy, and destabilise institutions. By engaging with these issues thoughtfully, we can shape a future where AI enhances human experience, dignity, and justice.
Whether you’re a researcher developing new models or a member of the public navigating AI-powered services, the ethics of AI is a shared responsibility. Let’s ensure that the future of AI is not only intelligent, but also just, inclusive, and human-centred.
Transforming Health and Care Beyond the Hospital
by Group Member: Bob Laramee
A Case Study: Unveiling Hidden Patterns
Meeting the Grand Challenge: Led by the University of Nottingham, the ‘Bringing Healthcare Data to Life’ project develops novel data visualisation tools that enable users to pinpoint patterns, hotspots, trends and anomalies for all kinds of diseases and disorders including mental health. One key benefit is the extra dimension these tools deliver in terms of informing measures aimed at preventing illnesses and aiding their management at home – options crucial to easing pressure on hospitals and meeting the needs of the UK’s ageing population.
Vision and Value: From Electronic Health Records to information held by healthcare providers and public health agencies across the country, the UK’s reservoir of medical data is vast and fast-growing. The challenge is to turn this huge resource into valuable insights that help combat conditions such as depression, cancer, coronary heart disease, hypertension, stroke and asthma. Not only are the datasets extremely diverse, complex and variable in quality; their sheer size is also an issue when trying to develop sound platforms to support decisions that can make a material difference to disease prediction, prevention, diagnosis and treatment.
The Nottingham team’s solution is rooted in a proven principle: visual images can offer the best way of ensuring the brain absorbs information quickly and indelibly. But traditional line, bar and pie charts have their limitations. The team’s vision, then, is to develop new data visualisation software tools that, for any UK-based collection of raw public healthcare data, enable users not just to gain rapid, accurate overviews but also to drill down easily into the detail, combining sophistication with ready comprehensibility.
Key Components: Supported by £358,000 of EPSRC funding, the four-year project is dovetailing computer science with data analytics to tackle this classic ‘big data’ challenge. Input from partners in the digital health sector will ensure outputs achieve the primary aim of meeting the real-world requirements of data analysts in the healthcare space.
In terms of specific capabilities, the goal is to build data-mining software that generates easy-to-understand graphics showing the prevalence and geographical spread of individual medical conditions, and uses state-of-the-art visualisation techniques to reveal how the prevalence and spread of these conditions have changed over time. Filtering and selection options will allow users to focus on subsets of particular interest, such as disease incidence among specific age groups.
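The kind of filtering and prevalence computation described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the record fields, region names and figures are invented, not the project's actual data model:

```python
from collections import defaultdict

# Invented patient-level records; a real dataset would be far larger
# and come from sources such as Electronic Health Records.
records = [
    {"region": "Nottingham", "age": 72, "condition": "asthma"},
    {"region": "Nottingham", "age": 45, "condition": "hypertension"},
    {"region": "Nottingham", "age": 68, "condition": "hypertension"},
    {"region": "Derby",      "age": 70, "condition": "hypertension"},
    {"region": "Derby",      "age": 34, "condition": "asthma"},
]

def prevalence_by_region(records, condition, min_age=0, max_age=200):
    """Share of records in each region, within an age band, matching a condition."""
    totals, matches = defaultdict(int), defaultdict(int)
    for r in records:
        if min_age <= r["age"] <= max_age:
            totals[r["region"]] += 1
            if r["condition"] == condition:
                matches[r["region"]] += 1
    return {region: matches[region] / totals[region] for region in totals}

# Hypertension prevalence restricted to the 65+ age group:
print(prevalence_by_region(records, "hypertension", min_age=65))
```

The age-band parameters play the role of the "filtering and selection options" mentioned above; a visualisation layer would then map the resulting per-region figures onto charts or choropleth maps.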