

Showing 5 results for Artificial Intelligence

Masoud Amanzadeh, Mahnaz Hamedan,
Volume 17, Issue 0 (12-2024)
Abstract

Health chatbots, powered by artificial intelligence (AI), are revolutionizing healthcare by providing accessible, personalized, and efficient health-related assistance. These tools have found applications in symptom checking, mental health support, and even aiding in clinical decision-making. While their potential to enhance healthcare efficiency is significant, the use of medical chatbots raises important ethical concerns that must be addressed. The aim of this study is to investigate the ethical challenges and considerations of health chatbots. In this article, we reviewed the literature on the ethical considerations of health chatbots. PubMed, Scopus, Web of Science, and Google Scholar were searched using related keywords such as "Chatbot," "conversational agent," "ethics," "medical," and "healthcare." Relevant studies were selected and reviewed based on specified inclusion/exclusion criteria. The review identified several ethical concerns associated with health chatbots: 1) Privacy and Data Security: Patient data collected by chatbots are vulnerable to breaches, raising concerns about confidentiality and misuse. 2) Accuracy and Reliability: Errors in chatbot responses can lead to misdiagnoses or inappropriate advice, potentially harming patients. 3) Bias and Equity: AI algorithms may perpetuate biases present in training datasets, leading to unequal care for certain demographic groups. 4) Accountability and Responsibility: Unclear legal frameworks complicate the allocation of responsibility in cases of harm. 5) Autonomy and Trust: Overreliance on chatbots may diminish the human element of care, affecting trust and patient autonomy in decision-making. While health chatbots offer substantial benefits in accessibility and efficiency, addressing their ethical challenges is imperative. A robust ethical framework emphasizing privacy, transparency, fairness, and accountability is needed to mitigate risks.
Continuous monitoring, user education, and adherence to evolving AI regulations can ensure safe and equitable integration of chatbots in healthcare.

Mohammad Shojaeinia,
Volume 17, Issue 0 (12-2024)
Abstract

Artificial Intelligence (AI) represents a transformative and innovative approach in healthcare with the potential to revolutionize diagnostic, therapeutic, administrative, educational, research, and managerial processes. Given that AI systems influence reasoning, decision-making, and the delivery of care, their implementation faces challenges, particularly ethical considerations rooted in the unique nature of the healthcare system—where patient welfare, trust, and the autonomy of healthcare providers hold paramount importance. This study adopts a qualitative approach. Various information sources, including journal articles and other publications, were reviewed. The applications of AI in clinical environments and its impact on individuals' interactions with healthcare systems, decision-making processes, and clinical workflows were analyzed, and relevant ethical considerations were extracted. The results indicate that the integration of AI in healthcare, despite its extensive benefits in prevention, diagnosis, treatment, prediction, decision-making, process automation, medication and therapeutic recommendations, surgical guidance, personalized medicine, telemedicine systems, and numerous other applications, is accompanied by a set of ethical considerations. Addressing these considerations is crucial to ensure the responsible and equitable use of these technologies. These include concerns related to patient privacy and data security, biases in AI systems, transparency, explainability, interpretability, accountability, informed consent, impacts on the relationships between healthcare providers and patients, equitable access to AI benefits, the appropriate and judicious use of technology, ethical use of automation, preservation of human dignity, effective oversight and regulation, legal and legislative issues, and long-term implications such as preventing misuse of predictive data by insurers or employers, among other patient rights-related issues.
The utilization of AI in healthcare necessitates the development of ethical and legal frameworks that balance technological innovation with the humanistic principles underpinning healthcare systems. This ensures that while leveraging the advantages of AI, privacy, justice, equity, and human dignity are safeguarded. Emphasis on continuous monitoring and aligning AI-based systems with human values can foster trust in these technologies, ensuring that AI is used responsibly and adheres to ethical standards, ultimately serving to enhance public health outcomes responsibly and equitably.

Reza Salehinia, Marzieh Nasiri Sangari, Hossein Abbasian, Sajjad Salehian,
Volume 17, Issue 0 (12-2024)
Abstract

Artificial intelligence (AI) represents a significant human advancement. The proliferation of AI technologies within the healthcare sector has led to substantial improvements in health outcomes and medical indicators. However, the application of AI in healthcare is accompanied by numerous ethical challenges. This study aimed to investigate the ethical considerations associated with the use of AI in the healthcare domain. This narrative review included articles published between February 2019 and November 2024. A comprehensive literature search was conducted across Iranian databases, including Magiran and SID, as well as international scientific databases such as PubMed, Web of Science, Medline, ScienceDirect, and Google Scholar. Keywords used for the search included "Ethics," "Artificial Intelligence," and "Health" in both Persian and English. After applying inclusion criteria and conducting quality assessments, nine studies were deemed eligible for inclusion in this review. The findings of previous studies demonstrate that the utilization of AI in healthcare has yielded significant benefits, including more accurate disease diagnoses, improved clinical predictions, more efficient hospital management, optimized resource allocation, enhanced patient care, streamlined clinical workflows, and advancements in medical research. These technologies have contributed to increased efficiency and quality within healthcare services. However, significant ethical challenges remain, including data privacy and security concerns, algorithmic bias, transparency issues, the need for robust clinical validation, and the importance of ensuring professional responsibility. Adherence to principles such as transparency, fairness, privacy protection, and equitable access is crucial for the responsible development and deployment of AI in healthcare.
Ultimately, achieving a balance between technological advancements and human values is paramount for the sustainable and ethical utilization of AI in this domain. The findings of this review underscore the profound impact of AI on improving quality of life and enhancing services across various sectors, particularly healthcare, by providing innovative solutions. However, the optimal utilization of AI in healthcare necessitates a meticulous consideration of ethical implications, rigorous monitoring of AI systems, and proactive efforts to address the existing challenges.

Nafiseh Rezaei, Rasha Atlasi,
Volume 17, Issue 0 (12-2024)
Abstract

Artificial intelligence (AI) ethics encompasses principles and standards guiding the design and application of AI, ensuring privacy, security, and fairness. This study aims to conduct a scientometric analysis of research in this field, identifying key features and emerging trends. A search was conducted in the PubMed database using the Medical Subject Headings (MeSH) terms "artificial intelligence" and "ethics." All indexed documents from inception to September 1, 2024, were retrieved and analyzed. Scientometric analysis and data visualization were performed using R, with results presented through tables, graphs, and scientific maps. A total of 534 papers were published in this domain from 1986 to 2024, with the highest number (n=70) in 2024. The American Journal of Bioethics had the most publications (n=30), and Melissa D. McCradden (University of Toronto) was the most prolific author with five articles. The University of Oxford (n=24) and Stanford University School of Medicine (n=21) were the leading institutions in this field. The most active countries were the USA (n=236), Germany (n=91), and France (n=52). In 2024, the top trending topics included "research personnel," "informed consent/ethics," and "artificial intelligence/ethics/trends," while in 2023, "privacy," "biomedical research," and "medical education" were predominant. The field of AI ethics has seen exponential growth in scientific output, paralleling the rapid advancements in AI applications across disciplines and daily life. Addressing ethical concerns and fostering international research collaboration will be essential for maximizing benefits while mitigating challenges in this evolving domain.
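The publication-count trend at the heart of the scientometric analysis above (papers per year, peaking at 70 in 2024) can be illustrated with a minimal Python sketch. The records below are toy placeholders, not the study's actual PubMed dataset, and the field names (`title`, `year`) are assumptions for illustration:

```python
from collections import Counter

# Toy records standing in for retrieved PubMed entries; the titles and
# years are illustrative assumptions, not data from the study.
records = [
    {"title": "Paper A", "year": 2022},
    {"title": "Paper B", "year": 2023},
    {"title": "Paper C", "year": 2024},
    {"title": "Paper D", "year": 2024},
    {"title": "Paper E", "year": 2024},
]

def publications_per_year(recs):
    """Count how many records fall in each publication year."""
    return Counter(r["year"] for r in recs)

counts = publications_per_year(records)
# The year with the most publications and its paper count.
peak_year, peak_n = max(counts.items(), key=lambda kv: kv[1])
print(peak_year, peak_n)
```

In practice such counts would be computed over records retrieved from PubMed and then visualized, as the study did with R.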
Amirmohammad Azarakhsh, Mohammadreza Dinmohammadi, Kian Nouroozi Tabrizi, Kowsar Nouri,
Volume 17, Issue 0 (12-2024)
Abstract

In recent years, artificial intelligence (AI) has significantly impacted the publication of research articles, transforming the landscape of academic writing and dissemination. However, the integration of AI in this process presents significant ethical challenges that require careful consideration. This review study utilized a comprehensive search strategy, employing keywords such as "artificial intelligence," "publication ethics," "ethical challenges," "academic integrity," and "research dissemination" to identify relevant articles in scientific databases including PubMed, Scopus, CINAHL, and Google Scholar. The search included articles published between 2010 and 2024 in both English and Persian. Research articles, systematic reviews, and case reports that included the specified keywords in their titles and abstracts were selected. A total of 150 articles were screened, and 50 relevant studies were included for detailed analysis. The analysis identified several ethical challenges associated with the use of AI in academic publishing. Concerns regarding academic integrity are paramount, as AI-generated content can blur the lines between original research and automated writing, raising issues of authorship and plagiarism. Furthermore, the reliance on AI tools for data analysis and manuscript preparation can raise questions about the accuracy and validity of research findings. Additionally, the potential for bias embedded within AI algorithms is a significant concern, as it can influence the selection of research topics, the framing of research questions, and even the peer review process. The lack of transparency in AI-driven editorial processes can further undermine trust in academic publishing. This review underscores the urgent need for robust ethical frameworks and regulations to guide the responsible use of AI in academic publishing. Increased awareness and training among researchers and editors regarding the ethical implications of AI are crucial.
Interdisciplinary collaborations are essential to address these challenges effectively and ensure the integrity and trustworthiness of academic research in the AI era.
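The keyword-based screening step described in the abstract above (selecting articles whose titles and abstracts contain the specified terms) can be sketched as follows. This is a minimal illustration under assumed record fields (`title`, `abstract`); the sample records are hypothetical, not the 150 articles the study screened:

```python
# Keywords drawn from the abstract's stated search strategy.
KEYWORDS = ("artificial intelligence", "publication ethics")

# Hypothetical records, not the study's actual screening set.
records = [
    {"title": "Artificial intelligence in peer review", "abstract": "..."},
    {"title": "Crop yields in 2020", "abstract": "No relevant terms here."},
]

def matches(record, keywords=KEYWORDS):
    """Keep a record if any keyword appears in its title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(k in text for k in keywords)

included = [r for r in records if matches(r)]
print(len(included))
```

A real screening pipeline would add inclusion criteria beyond keyword matching (publication type, date range, language), as the study describes.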



© 2026, Tehran University of Medical Sciences, CC BY-NC 4.0
