La vulnerabilidad antropotécnica: una propuesta para la interpretación de las transformaciones de la relación médico paciente en la era de la inteligencia artificial
dc.contributor.advisor | Escobar Triana, Jaime | |
dc.contributor.author | Riaño Moreno, Julián Camilo | |
dc.date.accessioned | 2024-08-06T19:49:28Z | |
dc.date.available | 2024-08-06T19:49:28Z | |
dc.date.issued | 2024-07 | |
dc.description.abstract | La inteligencia artificial (IA), al ser una tecnología autónoma, evolutiva y altamente eficiente, promete automatizar y optimizar procesos en diversos sectores, incluyendo el político, económico, empresarial y de la salud. Esta promesa conlleva una potencial transformación en múltiples actividades humanas, incluyendo la práctica médica y, en particular, la relación médico-paciente (RMP). La inserción de la IA en el ámbito médico coloca tanto al médico como al paciente en el núcleo de un complejo industrial y digital imponente. El médico adopta el rol de intermediario de información, mientras que el paciente actúa como consumidor, gestor y productor en el entorno digital. En este contexto, la IA prioriza valores vinculados con su industria, como la eficiencia y precisión, así como la privatización y rendimiento económico, en lugar de valores intrínsecos a la RMP, tales como la honestidad, confianza y confidencialidad. Esta dinámica engendra nuevas formas de vulnerabilidad, que requieren un análisis cuidadoso desde una perspectiva que integre la relación entre vulnerabilidad y tecnología. Al analizar las interacciones entre vulnerabilidad e IA, se observa que las perspectivas comunes sobre vulnerabilidad suelen ser técnicas y centradas en la ingeniería y el diseño, enfocándose en la identificación y minimización de amenazas y riesgos. Aunque este enfoque es esencial, es notable su adopción similar en la medicina, lo que indica una percepción instrumental de la IA en este campo y una urgencia por su implementación en las ciencias de la salud, acorde con un tecno-solucionismo médico. Esto también señala un alineamiento de la medicina con los valores de los actores e industrias del mundo digital. Por otro lado, los enfoques tradicionales de vulnerabilidad en la atención sanitaria se centran en el cuidado y protección de las personas, considerando la vulnerabilidad como una característica inherente a individuos cuya autonomía está comprometida o como un adjetivo aplicable a grupos que no pueden protegerse a sí mismos o son dependientes. Esto conlleva que, en la RMP, generalmente solo los pacientes sean vistos como vulnerables, ignorando la vulnerabilidad inherente tanto al médico como al paciente. Esta limitación en los entendimientos de la vulnerabilidad, restringidos a contextos de atención sanitaria o de ingeniería, es problemática porque ignora la complejidad de la vulnerabilidad en la RMP, reduciéndola a un asunto meramente técnico o instrumental y no ético. Por ende, en el marco de esta investigación, se hace necesario avanzar más allá de la perspectiva técnica de la vulnerabilidad, integrándola para orientarla hacia el ámbito bioético. Esto conduce a cuestionamientos sobre las relaciones y significados de vulnerabilidad y tecnología en los análisis bioéticos, y sobre si estos son adecuados para abordar las transformaciones en la RMP impulsadas por la IA. Se busca comprender cómo estos enfoques bioéticos pueden ofrecer respuestas a los desafíos planteados por la incorporación de la IA en la medicina, especialmente en lo que respecta a las dinámicas de vulnerabilidad entre médicos y pacientes. Para responder a estas cuestiones, se realizó un análisis de las fundamentaciones más recientes sobre vulnerabilidad desde tres perspectivas éticas diferentes: la bioética global de Henk ten Have, la ética de la caricia de Corine Pelluchon y la ética de la tecnología de Mark Coeckelbergh.
Este análisis se llevó a cabo mediante una metodología hermenéutica, utilizando un análisis semántico por CAQDAS y un ejercicio dialéctico-interpretativo siguiendo la “hermenéutica de la distancia” de Paul Ricoeur. La anticipación de sentido central de este trabajo fue que los significados de vulnerabilidad dependen de los significados de tecnología de cada autor, un aspecto frecuentemente pasado por alto en el discurso tradicional de la bioética. Así, comprender las transformaciones de la RMP provocadas por la IA implica repensar los significados de vulnerabilidad a partir de la tecnología, considerando nuevos tipos de actores y relaciones, como la interacción hombre-máquina. La metodología y el desarrollo de la anticipación se detallan en el primer capítulo de la tesis. Este capítulo establece tres objetivos específicos que orientan el propósito central del estudio: comprender las transformaciones en la RMP provocadas por la IA, analizando las relaciones y significados de vulnerabilidad y tecnología según Henk ten Have, Corine Pelluchon y Mark Coeckelbergh. Estos objetivos se desarrollan en los capítulos dos a cuatro. El primer objetivo, abordado en el segundo capítulo, es describir las transformaciones de la RMP impulsadas por la IA entre 2010 y 2021. El segundo objetivo, tratado en el tercer capítulo, es explicar las convergencias y divergencias entre los significados de vulnerabilidad y tecnología en las reflexiones de los autores mencionados. El tercer objetivo, expuesto en el cuarto capítulo, es proponer un modelo de vulnerabilidad que permita entender la RMP en el contexto de la IA, basándose en los significados de vulnerabilidad y tecnología de estos autores. Estos capítulos forman una secuencia que describe la situación actual (capítulo dos), proporciona un marco analítico (capítulo tres) y presenta una propuesta (capítulo cuatro). El análisis secuencial realizado en este trabajo culmina en la propuesta de una ética de la vulnerabilidad fundamentada en la idea de "vulnerabilidad antropotécnica". Esta concepción se nutre de diversas teorías y filosofías, entre ellas la posfenomenología, la teoría de la interacción, la teoría de los sistemas sociotécnicos (STS), la antropotécnica de Sloterdijk y la imaginación moral. Este enfoque proporciona una perspectiva novedosa para abordar las transformaciones observadas en la RMP en el contexto de la IA, teniendo en cuenta no solo los actores tradicionales de la RMP, sino también otros actores humanos, como desarrolladores y diseñadores; actores no humanos, como los propios dispositivos tecnológicos; y metaactores, como la industria digital, entre otros. Para abarcar la complejidad de la RMP en la era de la IA, se propone el "modelo STS de la IMP/RMP" como un marco analítico que permite la exploración detallada de estas transformaciones y de la vulnerabilidad emergente. Este modelo, al integrar aspectos sociotécnicos y reconocer la tecnología como “sistema”, ofrece una comprensión más profunda de cómo la interacción médico-paciente (IMP) se ve influenciada y modificada por diferentes actores y formas de interacción que emergen a causa de la IA. De esta manera, la propuesta de “vulnerabilidad antropotécnica”, como concepto central desarrollado en este trabajo, sugiere que la vulnerabilidad en el contexto de la RMP no es estática ni unidimensional, sino que es dinámica y se ve afectada por múltiples factores tecnológicos y humanos.
Esta perspectiva amplía el entendimiento de la vulnerabilidad más allá de una simple categorización de individuos o grupos como "vulnerables", hacia una comprensión más integrada de cómo la tecnología y la interacción humana colectiva conforman y redefinen la vulnerabilidad en el ámbito de la salud. Este trabajo representa uno de los primeros acercamientos en Latinoamérica para proponer elementos fundacionales para una bioética de la IA, superando los enfoques tradicionales de las éticas de la IA, los humanismos de la IA, y las dimensiones reduccionistas de las éticas prácticas y su modelo ingenieril. Ofrece un marco epistemológico y metodológico para analizar la vulnerabilidad, una categoría generalmente desatendida en la ética de la IA y comúnmente abordada desde una perspectiva de ingeniería que limita la reflexión sobre la vulnerabilidad. Así, este trabajo se posiciona como una contribución pionera a la ética biomédica y la bioética en la era digital. | |
dc.description.abstractenglish | Artificial Intelligence (AI), as an autonomous, evolving, and highly efficient technology, promises to automate and optimize processes in various sectors, including politics, economics, business, and healthcare. This promise entails a potential transformation in multiple human activities, including medical practice and, particularly, the doctor-patient relationship (DPR). The insertion of AI into the medical field places both the doctor and the patient at the core of an imposing industrial and digital complex. The doctor adopts the role of information intermediary, while the patient acts as a consumer, manager, and producer in the digital environment. In this context, AI prioritizes values linked to its industry, such as efficiency and precision, as well as privatization and economic performance, instead of values intrinsic to the DPR, such as honesty, trust, and confidentiality. This dynamic engenders new forms of vulnerability, which require careful analysis from a perspective that integrates the relationship between vulnerability and technology. When analyzing the interactions between vulnerability and AI, it is observed that common perspectives on vulnerability tend to be technical and centered on engineering and design, focusing on identifying and minimizing threats and risks. Although this approach is essential, its similar adoption in medicine is notable, indicating an instrumental perception of AI in this field and an urgency for its implementation in the health sciences, in line with medical techno-solutionism. This also signals an alignment of medicine with the values of actors and industries in the digital world. On the other hand, traditional approaches to vulnerability in healthcare focus on the care and protection of people, considering vulnerability as an inherent characteristic of individuals whose autonomy is compromised or as an adjective applicable to groups that cannot protect themselves or are dependent. As a result, in the DPR, generally only patients are seen as vulnerable, ignoring the inherent vulnerability of both the doctor and the patient. This limited understanding of vulnerability, restricted to healthcare or engineering contexts, is problematic because it ignores the complexity of vulnerability in the DPR, reducing it to a merely technical or instrumental issue rather than an ethical one. Therefore, within the framework of this research, it becomes necessary to move beyond the technical perspective of vulnerability, integrating it in order to orient it towards the bioethical realm. This leads to questions about the relationships and meanings of vulnerability and technology in bioethical analyses, and whether these are adequate to address the transformations in the DPR driven by AI. This research seeks to understand how these bioethical approaches can offer responses to the challenges posed by the incorporation of AI in medicine, especially regarding the dynamics of vulnerability between doctors and patients. To answer these questions, an analysis of the most recent foundations on vulnerability was carried out from three different ethical perspectives: Henk ten Have's global bioethics, Corine Pelluchon's ethics of caress, and Mark Coeckelbergh's ethics of technology. This analysis was conducted using a hermeneutic methodology, employing a semantic analysis by CAQDAS and a dialectical-interpretative exercise following Paul Ricoeur's "hermeneutics of distance."
The central anticipation of meaning in this work was that the meanings of vulnerability depend on each author's meanings of technology, an aspect often overlooked in traditional bioethical discourse. Thus, understanding the transformations of the DPR caused by AI implies rethinking the meanings of vulnerability on the basis of technology, considering new types of actors and relationships, such as human-machine interaction. The methodology and the development of the anticipation are detailed in the first chapter of the thesis. This chapter establishes three specific objectives that guide the central purpose of the study: to understand the transformations in the DPR caused by AI, analyzing the relationships and meanings of vulnerability and technology according to Henk ten Have, Corine Pelluchon, and Mark Coeckelbergh. These objectives are developed in chapters two through four. The first objective, addressed in the second chapter, is to describe the transformations of the DPR driven by AI between 2010 and 2021. The second objective, dealt with in the third chapter, is to explain the convergences and divergences between the meanings of vulnerability and technology in the reflections of the mentioned authors. The third objective, presented in the fourth chapter, is to propose a model of vulnerability that allows for an understanding of the DPR in the context of AI, based on these authors' meanings of vulnerability and technology. These chapters form a sequence that describes the current situation (chapter two), provides an analytical framework (chapter three), and presents a proposal (chapter four). The sequential analysis carried out in this work culminates in the proposal of an ethics of vulnerability based on the idea of "anthropotechnical vulnerability." This conception draws from various theories and philosophies, including post-phenomenology, interaction theory, sociotechnical systems theory (STS), Sloterdijk's anthropotechnics, and moral imagination. This approach provides a novel perspective to address the observed transformations in the DPR in the context of AI, considering not only the traditional actors of the DPR but also other human actors, such as developers and designers; non-human actors, such as the technological devices themselves; and meta-actors, such as the digital industry, among others. To encompass the complexity of the DPR in the AI era, the "STS model of DPI/DPR" is proposed as an analytical framework that allows for a detailed exploration of these transformations and of the emerging vulnerability. This model, by integrating sociotechnical aspects and recognizing technology as a "system," offers a deeper understanding of how the doctor-patient interaction (DPI) is influenced and modified by different actors and forms of interaction that emerge due to AI. In this way, the proposal of "anthropotechnical vulnerability," as the central concept developed in this work, suggests that vulnerability in the context of the DPR is neither static nor one-dimensional, but dynamic and affected by multiple technological and human factors. This perspective broadens the understanding of vulnerability beyond a simple categorization of individuals or groups as "vulnerable," towards a more integrated understanding of how technology and collective human interaction shape and redefine vulnerability in the health field.
This work represents one of the first approaches in Latin America to propose foundational elements for a bioethics of AI, going beyond the traditional approaches of AI ethics, AI humanisms, and the reductionist dimensions of practical ethics and their engineering model. It offers an epistemological and methodological framework for analyzing vulnerability, a category generally neglected in AI ethics and commonly approached from an engineering perspective that limits reflection on it. Thus, this work positions itself as a pioneering contribution to biomedical ethics and bioethics in the digital era. | |
dc.description.degreelevel | Doctorado | spa |
dc.description.degreename | Doctor en Bioética | spa |
dc.format.mimetype | application/pdf | |
dc.identifier.instname | instname:Universidad El Bosque | spa |
dc.identifier.reponame | reponame:Repositorio Institucional Universidad El Bosque | spa |
dc.identifier.repourl | repourl:https://repositorio.unbosque.edu.co | |
dc.identifier.uri | https://hdl.handle.net/20.500.12495/12842 | |
dc.language.iso | es | |
dc.publisher.faculty | Departamento de Bioética | spa |
dc.publisher.grantor | Universidad El Bosque | spa |
dc.publisher.program | Doctorado en Bioética | spa |
dc.relation.references | Abate, T. (2020). Smarter Hospitals: How AI-Enabled Sensors Could Save Lives. Obtenido de https://hai.stanford.edu/news/smarter-hospitals-how-ai-enabled-sensors-could-save-lives | |
dc.relation.references | Abdalla, M., & Abdalla, M. (30 de 07 de 2021). The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity. Association for Computing Machinery. doi:10.1145/3461702.3462563 | |
dc.relation.references | Adela Martin, D., Conlon , E., & Bowe , B. (2021). Multi-level Review of Engineering Ethics Education: Towards a Socio-technical Orientation of Engineering Education for Ethics. Sci Eng Ethics, 27(60). doi:https://doi.org/10.1007/s11948-021-00333-6 | |
dc.relation.references | Agamben, G. (2006). Homo sacer. El poder soberano y la nuda vida. Valencia: Pre-Textos. | |
dc.relation.references | Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7(e7702). doi:https://doi.org/10.7717/peerj.7702 | |
dc.relation.references | Akbari, A. (2021). Authoritarian Surveillance: A Corona Test. Surveillance & Society, 19(1), 98-103. doi:10.24908/ss.v19i1.14545 | |
dc.relation.references | Albrieu, R. R. (2018). Inteligencia artificial y crecimiento económico. Oportunidades y desafíos para Colombia. CIPPEC. Obtenido de https://news.microsoft.com/wp-content/uploads/prod/sites/41/2018/11/IA-y-Crecimiento-COLOMBIA.pdf | |
dc.relation.references | Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston. | |
dc.relation.references | Alvarellos, M., Sheppard, H., Knarston, I., Davison, C., Raine , N., Seeger, T., . . . Chatzou Dunford, M. (2022). Democratizing clinical-genomic data: How federated platforms can promote benefits sharing in genomics. Genet, 13(1045450). doi:https://doi.org/10.3389/fgene.2022.1045450 | |
dc.relation.references | Amazon. (23 de 02 de 2023). One Medical Joins Amazon to Make It Easier for People to Get and Stay Healthier. Obtenido de https://www.onemedical.com/mediacenter/one-medical-joins-amazon/ | |
dc.relation.references | Aminololama-Shakeri, S., & López, J. E. (2019). The Doctor-Patient Relationship With Artificial Intelligence. AJR. American journal of roentgenology, 212(2), 308–310. doi: 10.2214/AJR.18.20509 | |
dc.relation.references | Andersen, T., Langstrup, H., & Lomborg, S. (2020). Experiences With Wearable Activity Data During Self-Care by Chronic Heart Patients: Qualitative Study. Journal of medical Internet research, 20(7), e15873. doi:10.2196/15873 | |
dc.relation.references | Arendt, H. (2016). La condición humana. Paidós. | |
dc.relation.references | Arisa, E., Nagakura, K., & Fujita, T. (2020). Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems. AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 251-257. doi:https://doi.org/10.1145/3375627.3375846 | |
dc.relation.references | Aristóteles. (1973). Ética a Nicómaco. Sepan Cuantos. | |
dc.relation.references | Arthur, C. (23 de 08 de 2013). Tech giants may be huge, but nothing matches big data. Obtenido de The Guardian: https://www.theguardian.com/technology/2013/aug/23/tech-giants-data | |
dc.relation.references | Ballesteros de Valderrama, B. P. (2005). El concepto de significado desde el análisis del comportamiento y otras perspectivas. Universitas psychologica, 4(2), 231-244. Obtenido de http://www.scielo.unal.edu.co/scielo.php?pid=S1657-92672005000200010&script=sci_abstract&tlng=es | |
dc.relation.references | Banginwar, S. A. (2020). Impact of internet on doctor-patient relationship. International Journal of Basic & Clinical Pharmacology, 9(5), 731-735. doi:http://dx.doi.org/10.18203/2319-2003.ijbcp20201748 | |
dc.relation.references | Bergson, H. (1988). Essai sur les données immédiates de la conscience. Félix Alcan. | |
dc.relation.references | Bhuiyan , J., & Robins-Early, N. (14 de 06 de 2023). The EU is leading the way on AI laws. The US is still playing catch-up. Obtenido de The Guardian: https://www.theguardian.com/technology/2023/jun/13/artificial-intelligence-us-regulation | |
dc.relation.references | Big tech spends more on lobbying than pharma, finance and chemicals firms combined: report. (07 de 09 de 2021). Obtenido de https://www.campaignlive.co.uk/article/big-tech-spends-lobbying-pharma-finance-chemicals-firms-combined-report/1726554 | |
dc.relation.references | Binagwaho, A., Mathewos, K., & Davis, S. (2021). Time for the ethical management of COVID-19 vaccines. The Lancet. Global health, 9(8). doi:10.1016/S2214-109X(21)00180 | |
dc.relation.references | Blease, C. (2023). Open AI meets open notes: surveillance capitalism, patient privacy and online record access. Journal of Medical Ethics. doi:http://orcid.org/0000-0002-0205-1165 | |
dc.relation.references | Bogataj, T. (2023). Chapter 1 - Unpacking digital sovereignty through data governance by Melody Musoni 1. Obtenido de European Centre for Development Policy Management: https://policycommons.net/artifacts/3846704/chapter-1/4652659/ on 20 Dec 2023. CID: 20.500.12592/fpkqh2. | |
dc.relation.references | Boldt, J. (2019). The concept of vulnerability in medical ethics and philosophy. Philosophy, Ethics, and Humanities in Medicine, 14(6), 1-8. doi:https://doi.org/10.1186/s13010-019-0075-6 | |
dc.relation.references | Bostrom, N. (2005). The Fable of the Dragon-Tyrant. Journal of Medical Ethics, 31(5), 273-277. Obtenido de https://nickbostrom.com/fable/dragon.pdf | |
dc.relation.references | Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. En K. Frankish, & W. Ramsey (Edits.), Cambridge Handbook of Artificial Intelligence. Cambridge University Press. doi:10.1017/CBO9781139046855.020 | |
dc.relation.references | Bowlby, J. (1969). Attachment and loss. Hogarth Press and the Institute of Psycho-Analysis, 1. | |
dc.relation.references | Bowles, C. (2020). Future Ethics: the must-read guide to the ethics of emerging tech. NowNext. | |
dc.relation.references | Brandts-Longtin, O., Lalu, M., A Adie, E., Albert, M., Almoli, E., Almoli, F., . . . Montroy, J., P. (2022). Assessing the impact of predatory journals on policy and guidance documents: a cross-sectional study protocol. BMJ open, 12(4), e059445. doi:https://doi.org/10.1136/bmjopen-2021-059445 | |
dc.relation.references | Buccella, A. (2023). "AI for all" is a matter of social justice. AI Ethics, 1143–1152. doi:https://doi.org/10.1007/s43681-022-00222-z | |
dc.relation.references | Byrne, R., & Whiten, A. (1998). Machiavellian Intelligence Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press. | |
dc.relation.references | Calvo, T. &. (1991). Paul Ricoeur: los caminos de la interpretación. Barcelona, España: Anthropos. | |
dc.relation.references | Caplan, A. L. (1980). Ethical engineers need not apply: The state of applied ethics today. Science, Technology, & Human Values, 5(4), 24-32. | |
dc.relation.references | Capuzzo, K. (13 de 06 de 2023). 4 Step Guide on How to Transition from Healthcare to Tech. Obtenido de https://blog.qwasar.io/blog/4-step-guide-on-how-to-transition-from-healthcare-to-tech | |
dc.relation.references | Carnemolla, P. (2018). Ageing in place and the internet of things – how smart home technologies, the built environment and caregiving intersect. Vis. in Eng, 6(7). doi:https://doi.org/10.1186/s40327-018-0066-5 | |
dc.relation.references | Castro-Gómez, S. (2012). Sobre el concepto de antropotécnica en Peter Sloterdijk. Revista de Estudios Sociales, 43, 63-73. Obtenido de http://journals.openedition.org/revestudsoc/7127 | |
dc.relation.references | Cenci, A., & Cawthorne , D. (2020). Refining Value Sensitive Design: A (Capability-Based) Procedural Ethics Approach to Technological Design for Well-Being. Sci Eng Ethics, 26, 2629–2662. doi:https://doi.org/10.1007/s11948-020-00223-3 | |
dc.relation.references | Chugunova, M., & Sele, D. (2022). We and It: An Interdisciplinary Review of the Experimental Evidence on How Humans Interact with Machines. Journal of Behavioral and Experimental Economics, 99(101897), 102. doi:http://dx.doi.org/10.2139/ssrn.3692293 | |
dc.relation.references | Cipolla, C. (2018). Designing for Vulnerability: Interpersonal Relations and Design. The Journal of Design, Economics, and Innovation, 4(1), 111-122. doi:https://doi.org/10.1016/j.sheji.2018.03.001 | |
dc.relation.references | Clark, A., & Chalmers, D. (2011). La mente extendida. CIC. Cuadernos de Información y Comunicación, 16, 15-28. Obtenido de https://www.redalyc.org/articulo.oa?id=93521629002 | |
dc.relation.references | Clusmann, J., Kolbinger, F., Muti, H., Carrero, Z., Eckardt, J.-N., Laleh, N., . . . Kather, J. (2023). The future landscape of large language models in medicine. Commun Med, 3(141). doi:https://doi.org/10.1038/s43856-023-00370-1 | |
dc.relation.references | Coeckelbergh, M. (2010). Health Care, Capabilities, and AI Assistive Technologies. Ethical Theory and Moral Practice, 13, 181–190. doi:https://doi.org/10.1007/s10677-009-9186-2 | |
dc.relation.references | Coeckelbergh, M. (2011). Artificial companions: empathy and vulnerability mirroring in human-robot relations. Studies in ethics, law, and technology, 4(3). doi:10.2202/1941-6008.1126 | |
dc.relation.references | Coeckelbergh, M. (2011). Vulnerable Cyborgs: Learning to Live with our Dragons. Journal of Evolution and Technology, 22(1), 1-9. | |
dc.relation.references | Coeckelbergh, M. (2013). Drones, Information Technology, and Distance: Mapping The Moral Epistemology Of Remote Fighting. Ethics and Information Technology, 15(2). doi:DOI:10.1007/s10676-013-9313-6 | |
dc.relation.references | Coeckelbergh, M. (2013). Human Being@Risk. Enhancement, Technology, and the Evaluation of Vulnerability Transformations. Springer. | |
dc.relation.references | Coeckelbergh, M. (2014). Good healthcare is in the “how”: The quality of care, the role of machines, and the need for new skills. En Machine medical ethics (págs. 33-47). Cham: Springer International Publishing. | |
dc.relation.references | Coeckelbergh, M. (2014). The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Philos. Technol, 27, 61–77. doi:https://doi.org/10.1007/s13347-013-0133-8 | |
dc.relation.references | Coeckelbergh, M. (2015). Artificial agents, good care, and modernity. Theoretical Medicine and Bioethics, 36, 265-277. | |
dc.relation.references | Coeckelbergh, M. (2017). Hacking Technological Practices and the Vulnerability of the Modern Hero. Found Sci, 22, 357–362. | |
dc.relation.references | Coeckelbergh, M. (2017). The Art of Living with ICTs: The Ethics–Aesthetics of Vulnerability Coping and Its Implications for Understanding and Evaluating ICT Cultures. Found Sci, 22, 339–348. | |
dc.relation.references | Coeckelbergh, M. (2019). Moved by Machines: Performance Metaphors and Philosophy of Technology. Routledge. | |
dc.relation.references | Coeckelbergh, M. (2020). AI Ethics. The MIT Press. | |
dc.relation.references | Coeckelbergh, M. (2020). Technoperformances: using metaphors from the performance arts for a postphenomenology and posthermeneutics of technology use. AI & SOCIETY, 35(3), 557-568. doi:https://doi.org/10.1007/s00146-019-00926-7 | |
dc.relation.references | Coeckelbergh, M. (2020). The Postdigital in Pandemic Times: a Comment on the Covid-19 Crisis and its Political Epistemologies. Postdigit Sci Educ, 2, 547–550. doi:https://doi.org/10.1007/s42438-020-00119-2 | |
dc.relation.references | Coeckelbergh, M. (2021). Time Machines: Artificial Intelligence, Process, and Narrative. Philos. Technol, 34, 1623–1638. doi:https://doi.org/10.1007/s13347-021-00479-y | |
dc.relation.references | Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction. | |
dc.relation.references | Coeckelbergh, M., & Wackers, G. (2007). Imagination, distributed responsibility and vulnerable technological systems: the case of Snorre A. SCI ENG ETHICS, 13, 235–248. doi:https://doi.org/10.1007/s11948-007-9008-7 | |
dc.relation.references | Collins, R. (2005). Interaction Ritual Chains. Princeton University Press. | |
dc.relation.references | Cook, A., Thompson, M., & Ross, P. (2023). Virtual first impressions: Zoom backgrounds affect judgements of trust and competence. Plos One. doi:https://doi.org/10.1371/journal.pone.0291444 | |
dc.relation.references | Cooper, A, & Rodman, A. (2023). AI and Medical Education - A 21st-Century Pandora's Box. The New England journal of medicine, 389(5), 385–387. doi:https://doi.org/10.1056/NEJMp2304993 | |
dc.relation.references | Couture, S., & Toupin, S. (12 de 08 de 2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(10). doi:https://doi.org/10.1177/1461444819865984 | |
dc.relation.references | Cowie, M. &. (2021). Remote monitoring and digital health tools in CVD management. Nature Reviews Cardiology, 18, 457–458. doi:doi.org/10.1038/s41569-021-00548-x | |
dc.relation.references | Crawford, K. (04 de 06 de 2021). Time to regulate AI that interprets human emotions. Nature. doi:https://doi.org/10.1038/d41586-021-00868-5 | |
dc.relation.references | Cummings, M. L. (2006). Integrating ethics in design through the value-sensitive design approach. Science and engineering ethics, 12, 701–715. doi:https://doi.org/10.1007/s11948-006-0065-0 | |
dc.relation.references | Dalton-Brown, S. (2020). The Ethics of Medical AI and the Physician-Patient Relationship. Camb Q Healthc Ethics, 29(1), 115-121. doi:10.1017/S0963180119000847 | |
dc.relation.references | D'Amore, F., & Pirone , F. (2018). Doctor 2.0 and i-Patient: information technology in medicine and its influence on the physician-patient relation. Italian Journal Of Medicine, 12(1). doi:https://doi.org/10.4081/itjm.2018.956 | |
dc.relation.references | Davies, R, Ives, J, & Dunn, M. A . (2015). A systematic review of empirical bioethics methodologies. BMC Med Ethics, 16(15). doi:https://doi.org/10.1186/s12910-015-0010-3 | |
dc.relation.references | de Boer, B. (2021). Explaining multistability: postphenomenology and affordances of technologies. AI & Soc. doi:https://doi.org/10.1007/s00146-021-01272-3 | |
dc.relation.references | de Vries, M. J. (2005). Technology and the nature of humans. En M. J. de Vries, Teaching about Technology: An Introduction to the Philosophy of Technology for non-philosophers. Springer. | |
dc.relation.references | DeCamp, M., & Tilburt, J. C. (2019). Why we cannot trust artificial intelligence in medicine. Lancet Digit Health, 1(8). doi:10.1016/S2589-7500(19)30197-9 | |
dc.relation.references | Demographic Change and Healthy Ageing, Health Ethics & Governance (HEG). (2022). Ageism in artificial intelligence for health. WHO. | |
dc.relation.references | Departamento Administrativo de la Presidencia de la República. (30 de 03 de 2021). Obtenido de Cámara Colombiana de Informática y Telecomunicaciones: https://dapre.presidencia.gov.co/TD/MARCO-ETICO-PARA-LA-INTELIGENCIA-ARTIFICIAL-EN-COLOMBIA-2021.pdf | |
dc.relation.references | Dorr Goold, S., & Lipkin, M. (1999). The doctor-patient relationship: challenges, opportunities, and strategies. Journal of general internal medicine, 14(Suppl 1), S26–S33. doi:10.1046/j.1525-1497.1999.00267.x | |
dc.relation.references | Doudna, J., & Charpentier, E. (2014). The new frontier of genome engineering with CRISPR-Cas9. Science, 346(6213). doi:10.1126/science.1258096 | |
dc.relation.references | Drummond, D. (2021). Between competence and warmth: the remaining place of the physician in the era of artificial intelligence. npj Digit. Med, 4(85). doi:https://doi.org/10.1038/s41746-021-00457-w | |
dc.relation.references | Duarte, A. (2004). Biopolítica y diseminación de la violencia Arendt y la crítica del presente. Pasajes: Revista de pensamiento contemporáneo, 97-105. | |
dc.relation.references | Dubov, A., & Shoptawb, S. (2020). The Value and Ethics of Using Technology to Contain the COVID-19 Epidemic. The American journal of bioethics: AJOB, 20(7), W7–W11. doi:https://doi.org/10.1080/15265161.2020.1764136 | |
dc.relation.references | Dusek, V. (2009). What Is Technology? Defining or Characterizing Technology. En V. Dusek, Philosophy of Technology: An Introduction (págs. 26-37). Oxford: Blackwell Publishing. | |
dc.relation.references | Dutt, R. (2020). The impact of artificial intelligence on healthcare insurances. Artificial Intelligence in Healthcare, 271-293. doi:10.1016/B978-0-12-818438-7.00011-3 | |
dc.relation.references | Eichler, E. (2019). Genetic Variation, Comparative Genomics,and the diagnosis of disease. N Engl J Med, 381, 64-74. | |
dc.relation.references | Ekmekci, P., & Arda, B. (2020). Artificial Intelligence and Bioethics. Switzerland: Springer. doi:https://doi.org/10.1007/978-3-030-52448-7 | |
dc.relation.references | Emanuel , E. J., & Dubler, N. N. (1995). Preserving the physician-patient relationship in the era of managed care. JAMA, 273(4), 323–329. | |
dc.relation.references | Emery, F. E., & Trist, E. L. (1960). Socio-technical systems. Management science, models and techniques, 2, 83-97. | |
dc.relation.references | Emanuel, E. J., & Emanuel, L. L. (1992). Four Models of the Physician-Patient Relationship. JAMA, 267(16), 2221–2226. | |
dc.relation.references | Fernandes Martins, M., & Murry, L. T. (2022). Direct-to-consumer genetic testing: an updated systematic review of healthcare professionals’ knowledge and views, and ethical and legal concerns. European Journal of Human Genetics, 30, 1331–1343. doi: https://doi.org/10.1038/s41431-022-01205-8 | |
dc.relation.references | Ferraris, M. (1998). La Hermenéutica. Taurus. | |
dc.relation.references | Fineman, M. (2008). The Vulnerable Subject: Anchoring Equality in the Human Condition. Yale Journal of Law & Feminism, 20(1), 1-24. | |
dc.relation.references | FirstPost. (05 de 06 de 2023). AI Groom: US woman creates AI bot, marries it and starts family, calls 'him' the perfect husband. Obtenido de https://www.firstpost.com/world/us-woman-creates-ai-bot-marries-it-and-starts-family-calls-him-the-perfect-husband-12693012.html | |
dc.relation.references | Fogg, B. J. (2003). Persuasive Technology: Using Computers to Change What We Think and Do. San Francisco, Estados Unidos: Morgan Kaufmann Publishers. | |
dc.relation.references | Forbes. (29 de 04 de 2021). How Digital Transformation Impacts The Future Of Career Transitions. Obtenido de https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/04/29/how-digital-transformation-impacts-the-future-of-career-transitions/?sh=59ab9f5e3f71 | |
dc.relation.references | Forbes Colombia. (29 de 05 de 2022). Cómo abogados, psicólogos y hasta filósofos se están convirtiendo desarrolladores de software y científicos de datos. Obtenido de Forbes: https://forbes.co/2022/05/29/editors-picks/como-abogados-psicologos-y-hasta-filosofos-se-estan-convirtiendo-desarrolladores-de-software-y-cientificos-de-dato | |
dc.relation.references | Fosso Wamba, S., Bawack, E., Guthrie, C., Queiroz, M., & André Carillo, K. (2021). Are we preparing for a good AI society? A bibliometric review and research agenda. Technological Forecasting and Social Change, 167(120482), 1-27. doi:10.1016/j.techfore.2020.120482 | |
dc.relation.references | Foucault, M. (2002). Vigilar y castigar: Nacimiento de la prisión. Siglo XXI. Obtenido de https://www.ivanillich.org.mx/FoucaultCastigar.pdf | |
dc.relation.references | Fox, B. (29 de 06 de 2022). Healthcare Companies Spent More on Lobbying Than Any Other Industry Last Year. Obtenido de Promarket: https://www.promarket.org/2022/06/29/healthcare-companies-spent-more-on-lobbying-than-any-other-industry-last-year | |
dc.relation.references | Future Of Life Institute. (2017). Principios de IA de Asilomar. Obtenido de https://futureoflife.org/open-letter/ai-principles/ | |
dc.relation.references | Gartner. (s.f.). Hype Cycle de Gartner. Obtenido de https://www.gartner.es/es/metodologias/hype-cycle | |
dc.relation.references | Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S., Lermer, E., . . . Ghassemi, M. (2021). Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digital Medicine, 4(31). doi:10.1038/s41746-021-00385-9 | |
dc.relation.references | Gegúndez Fernández, J. (2018). Technification versus humanisation. Artificial intelligence for medical diagnosis. Arch Soc Esp Oftalmol (Engl Ed), 93(3), e17-e19. DOI: 10.1016/j.oftal.2017.11.004. | |
dc.relation.references | Giddens, A. (1991). Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Polity Press. | |
dc.relation.references | Github. (2020). Citaciones abiertas/coronavirus. Obtenido de https://github.com/opencitations/coronavirus/blob/master/data/dois_no_ref.csv | |
dc.relation.references | Gleeson, D., Townsend, B., Tenni, B., & Phillips, T. (2023). Global inequities in access to COVID-19 health products and technologies: A political economy analysis. Health Place, 83(103051). doi:DOI: 10.1016/j.healthplace.2023.103051 | |
dc.relation.references | Glenn, J. C., & Gordon, T. J. (30 de 04 de 2009). Futures Research Methodology — Version 3.0. (T. M. Project, Ed.) Washington, D.C. | |
dc.relation.references | Goasduff, L. (11 de 04 de 2022). Choose Adaptive Data Governance Over One-Size-Fits-All for Greater Flexibility. Obtenido de Gartner: https://www.gartner.com/en/articles/choose-adaptive-data-governance-over-one-size-fits-all-for-greater-flexib | |
dc.relation.references | Gómez-Vírseda, C, de Maeseneer, Y, & Gastmans, C. (2020). Relational autonomy in end-of-life care ethics: a contextualized approach to real-life complexities. BMC Med Ethics, 21(50). doi:https://doi.org/10.1186/s12910-020-00495-1 | |
dc.relation.references | Goodday, S. M., Geddes, J. R., & Friend, S. H. (2021). Disrupting the power balance between doctors and patients in the digital era. The Lancet. Digital health, 3(3), e142–e143. doi:https://doi.org/10.1016/S2589-7500(21)00004-2 | |
dc.relation.references | Gouldner, A. (1960). The Norm of Reciprocity: A Preliminary Statement. American Sociological Review, 25(2), 161-178. doi:https://doi.org/10.2307/2092623 | |
dc.relation.references | Gravett, W. (2022). Digital neocolonialism: the Chinese surveillance state in Africa. African Journal of International and Comparative Law, 30(1), 39-58. | |
dc.relation.references | Gröger, C. (2021). There is no AI without data. Association for Computing Machinery, 64(11), 98–108. doi:https://doi.org/10.1145/3448247 | |
dc.relation.references | Gu, H. (2023). Data, Big Tech, and the New Concept of Sovereignty. J OF CHIN POLIT SCI . doi:https://doi.org/10.1007/s11366-023-09855-1 | |
dc.relation.references | H+Pedia. (2023). Obtenido de https://hpluspedia.org/wiki/File:Transhumanism_Futures_Wheel.png | |
dc.relation.references | Haan, M, Ongena, Y. P., Hommes, S, & Kwee, T. C . (2019). A Qualitative Study to Understand Patient Perspective on the Use of Artificial Intelligence in Radiology. Journal of the American College of Radiology : JACR, 16(10), 1416–1419. doi:https://doi.org/10.1016/j.jacr.2018.12.043 | |
dc.relation.references | Haas, B. (04 de 04 de 2017). Chinese man 'marries' robot he built himself. Obtenido de The Guardian: https://www.theguardian.com/world/2017/apr/04/chinese-man-marries-robot-built-himself | |
dc.relation.references | Haenlein, M., & Kaplan, A. (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, 61(4), 5-14. doi:10.1177/0008125619864925 | |
dc.relation.references | Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism: clinical and experimental, 69(Suppl.), S36-S40. doi:https://doi.org/10.1016/j.metabol.2017.01.011 | |
dc.relation.references | Hannibal, G. (2021). Focusing on the Vulnerabilities of Robots through Expert Interviews for Trust in Human-Robot Interaction. Association for Computing Machinery, 288–293. doi:10.1145/3434074.3447178 | |
dc.relation.references | Hannibal, G. (2021). Trust in HRI: Probing Vulnerability as an Active Precondition. Obtenido de https://www.youtube.com/watch?v=DzpdRXZgMwk | |
dc.relation.references | Hannibal, G., & Weiss, A. (2022). Exploring the Situated Vulnerabilities of Robots for Interpersonal Trust in Human-Robot Interaction. En Trust in Robots (págs. 33–56). TU Wien Academic Press. doi:https://doi.org/10.34727/2022/isbn.978-3-85448-052-5_2 | |
dc.relation.references | Hansson, S. O. (2009). Philosophy of Medical Technology. En A. Meijers, Philosophy of Technology and Engineering Sciences (Vol. 9). Amsterdam, North Holland. | |
dc.relation.references | Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. Harper. | |
dc.relation.references | Harari, Y. N. (10 de 2018). Why Technology Favors Tyranny. The Atlantic. Obtenido de https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/ | |
dc.relation.references | Harris, J. (2023). An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA, 330(9), 801-802. doi:DOI: 10.1001/jama.2023.14525 | |
dc.relation.references | Hassanpour, A., Nguyen, A., Rani, A., Shaikh, S., Xu, Y., & Zhang, H. (2022). Big Tech Companies Impact on Research at the Faculty of Information Technology and Electrical Engineering. Computers and Society. doi:https://arxiv.org/abs/2205.01039 | |
dc.relation.references | Hazarika, I. (2020). Artificial intelligence: opportunities and implications for the health workforce. International health, 12(4), 241–245. doi:DOI: 10.1093/inthealth/ihaa007 | |
dc.relation.references | Heidegger, M. (1999). The Question Concerning Technology. En M. Heidegger, Basic Writings. Londres: Routledge. | |
dc.relation.references | Hendrikse, R., Adriaans, I., Klinge, T., & Fernández, R. (2021). The Big Techification of Everything. Science as Culture, 31(1), 59-71. doi:DOI:10.1080/09505431.2021.1984423 | |
dc.relation.references | Herkert, J. (2005). Ways of thinking about and teaching ethical problem solving: Microethics and macroethics in engineering. SCI ENG ETHICS , 11, 373–385. doi:https://doi.org/10.1007/s11948-005-0006-3 | |
dc.relation.references | Heyen, N.B., & Salloch, S. (2021). The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. BMC Medical Ethics, 22(112). doi:https://doi.org/10.1186/s12910-021-00679-3 | |
dc.relation.references | Hinde, R. A. (1976). Interactions, Relationships and Social Structure. Man, 11(1), 1-17. doi:https://doi.org/10.2307/2800384 | |
dc.relation.references | Hinde, R. A. (1997). Relationships: A Dialectical Perspective. Psychology Press. | |
dc.relation.references | Hoc, J. M. (2000). From human-machine interaction to human-machine cooperation. Ergonomics, 47(7), 833-843. doi:10.1080/001401300409044 | |
dc.relation.references | Hofmann, B. (2001). The technological invention of disease. Journal of Medical Ethics: Medical Humanities, 27, 10-19. | |
dc.relation.references | Hommels, A., Mesman, J., & Bijker, W. E. (2014). Vulnerability Technological Cultures: New Directions in Research and Governance. The MIT press. doi:https://doi.org/10.7551/mitpress/9209.001.0001 | |
dc.relation.references | Ihde, D. (2009). Postphenomenology and Technoscience: The Peking University Lectures. State University of New York Press. | |
dc.relation.references | Ienca, M., & Vayena, E. (2020). On the responsible use of digital data to tackle the COVID-19 pandemic. Nature Medicine, 26, 463–464. doi:https://doi.org/10.1038/s41591-020-0832-5 | |
dc.relation.references | Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Indiana University Press. | |
dc.relation.references | Ihde, D. (2015). Acoustic Technics. Lexington Books. | |
dc.relation.references | Ihde, D. (2019). Medical Technics. Minnesota: University of Minnesota Press. | |
dc.relation.references | Nature Machine Intelligence. (2021). People have the AI power. Nat Mach Intell, 3, 275. doi:https://doi.org/10.1038/s42256-021-00340-z | |
dc.relation.references | Jeanne, L., Bourdin, S., Nadou, F., & Noiret , G. (2023). Economic globalization and the COVID-19 pandemic: global spread and inequalities. GeoJournal, 88, 1181–1188. doi:https://doi.org/10.1007/s10708-022-10607-6 | |
dc.relation.references | Jecker, N. S. (2021). Nothing to be ashamed of: sex robots for older adults with disabilities. Journal of Medical Ethics, 47(1). doi:10.1136/medethics-2020-106645 | |
dc.relation.references | Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., & Doudna, J. (2012). A programmable dual RNA-guided DNA endonuclease in adaptive bacterial immunity. Science, 337(6096), 816-821. doi:10.1126/science.1225829 | |
dc.relation.references | Johannsen, G. (2019). Human-Machine Interaction (Vol. 21). CIRP Encyclopedia of Production Engineering. | |
dc.relation.references | Jonas, H. (2000). The Vulnerability of the Human Condition. En J. D. Rendtorff, & P. Kemp, Basic Ethical Principles in Bioethics and Biolaw. Autonomy, Dignity, Integrity and Vulnerability (págs. 115-122). Centre for Ethics and Law/Institut Borja de Bioética. | |
dc.relation.references | Kaltenbach, T. (2014). The impact of E-health on the pharmaceutical industry. International Journal of Healthcare Management, 7(4), 223-225. doi:doi:10.1179/2047970014Z.000000000103 | |
dc.relation.references | Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. doi:10.1016/j.bushor.2018.08.004 | |
dc.relation.references | Karches, K. E. (2018). Against the iDoctor: why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics, 39, 91–110. doi:https://doi.org/10.1007/s11017-018-9442-3 | |
dc.relation.references | Kasperbauer , T. (2020). Conflicting roles for humans in learning health systems and AI-enabled healthcare. Journal Of Evaluation in Clinical Practice, 27(3). doi:https://doi.org/10.1111/jep.13510 | |
dc.relation.references | Kickbusch , I., Piselli, D., Agrawal, A., Balicer, R., Banner, O., Adelhardt , M., . . . Xue, L. (2021). The Lancet and Financial Times Commission on governing health futures 2030: growing up in a digital world. Lancet (London, England), 398(10312), 1727–1776. doi:https://doi.org/10.1016/S0140-6736(21)01824-9 | |
dc.relation.references | Kim, J. (2005). Physicalism, or Something Near Enough. Princeton Monographs in Philosophy. | |
dc.relation.references | Kittay, E. (2011). The Ethics of Care, Dependence, and Disability. Ratio Juris, 24(1), 49-58. | |
dc.relation.references | Komasawa, N., & Yokohira, M. (2023). Simulation-Based Education in the Artificial Intelligence Era. Cureus, 15(6), e40940. doi:https://doi.org/10.7759/cureus.40940 | |
dc.relation.references | Koncz, A. (13 de 09 de 2022). The First Database Of Tech Giants Collaborating With Healthcare: What Can We Learn? The Medical Futurist. Obtenido de https://medicalfuturist.com/the-first-database-of-tech-giants-collaborating-with-healthcare-what-can-we-learn/ | |
dc.relation.references | Koncz, A. (19 de 10 de 2023). Digital Health Anxiety: When Wellness Tech Becomes A Stressor. Obtenido de The Medical Futurist: https://medicalfuturist.com/digital-health-anxiety-when-wellness-tech-becomes-a-stressor/ | |
dc.relation.references | Kshetri, N. (2023). ChatGPT in Developing Economies. IT Professional, 25(2), 16–19. doi:https://doi.org/10.1109/MITP.2023.3254639 | |
dc.relation.references | Kudina, O. (2019). The technological mediation of morality: Value dynamism, and the complex. PhD dissertation. Enschede: University of Twente. | |
dc.relation.references | Kudina, O., & Coeckelbergh, M. (2021). Alexa, define empowerment: voice assistants at home, appropriation and technoperformances. Journal of Information, Communication and Ethics in Society, 19(2), 299-312. doi:10.1108/JICES-06-2020-0072 | |
dc.relation.references | Kumar, K., Kumar, N., & Rachna, S. R. (2020). Role of IoT to avoid spreading of COVID-19. International Journal of Intelligent Networks, 1, 32-35. doi:https://doi.org/10.1016/j.ijin.2020.05.002 | |
dc.relation.references | Lage Gonçalves, L., Nardi, A., & Spear King, A. (2023). Digital Dependence in Organizations: Impacts on the Physical and Mental Health of Employees. Clinical Practice And Epidemiology In Mental Health, 19. doi:10.2174/17450179-v19-e230109-2022-17 | |
dc.relation.references | Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge: Harvard University Press. | |
dc.relation.references | Lee, K.-F., & Li, K. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Boston, Estados Unidos: Houghton Mifflin Harcourt. | |
dc.relation.references | Lee, N. (2019). Brave New World of Transhumanism. En N. Lee, The Transhumanism Handbook. Switzerland: Springer Nature. | |
dc.relation.references | Leite, H., Hodgkinson, I. R., & Gruber, T. (2020). New development: ‘Healing at a distance’—telemedicine and COVID-19. Public Money & Management, 40(6), 483-485. doi:10.1080/09540962.2020.1748855 | |
dc.relation.references | Lerner, I., Veil, R., Nguyen, D.-P., Phuc Luu, V., & Jantzen, R. (2018). Revolution in Health Care: How Will Data Science Impact Doctor–Patient Relationships? Frontiers in Public Health, 6. doi:https://doi.org/10.3389/fpubh.2018.00099 | |
dc.relation.references | Levinas, E. (1972). Humanismo del otro hombre. Buenos Aires, Argentina: Siglo XXI. | |
dc.relation.references | Lin, S. Y., Mahoney, M. R., & Sinsky, C. A. (2019). Ten Ways Artificial Intelligence Will Transform Primary Care. J GEN INTERN MED, 34, 1626–1630. doi:https://doi.org/10.1007/s11606-019-05035-1 | |
dc.relation.references | Liu, X., Keane, P., & Denniston, A. (2018). Time to regenerate: the doctor in the age of artificial intelligence. Journal of the Royal Society of Medicine, 111(4), 113-116. doi:https://doi.org/10.1177/0141076818762648 | |
dc.relation.references | Loten, A., & Bousquette, I. (01 de 09 de 2022). Tech Companies Say Going Private Comes With Benefits. The Wall Street Journal. | |
dc.relation.references | Luchini, C., Pea, A., & Scarpa, A. (2022). Artificial intelligence in oncology: current applications and future perspectives. British Journal of Cancer, 126, 4–9 . doi:https://doi.org/10.1038/s41416-021-01633-1 | |
dc.relation.references | Lupton, D. (2020). A more-than-human approach to bioethics: The example of digital health. Bioethics, 34, 1-8. doi:doi:10.1111/bioe.12798 | |
dc.relation.references | Luxton, D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial intelligence in medicine, 62(1), 1-10. doi:DOI: 10.1016/j.artmed.2014.06.004 | |
dc.relation.references | Mackenzie, C, Rogers, W, & Dodds, S. (2013). Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford University Press. | |
dc.relation.references | Maliandi, R. (2010). Fenomenología de la conflictividad. Las Cuarenta. DOI:10.24316/prometeica.v0i3.64 | |
dc.relation.references | McAninch , A. (2023). Go Big or Go Home? A New Case for Integrating Micro-ethics and Macro-ethics in Engineering Ethics Education. Science and engineering ethics, 29(3), 20. doi:https://doi.org/10.1007/s11948-023-00441-5 | |
dc.relation.references | McKendrick, J. (05 de 07 de 2016). Is All-Cloud Computing Inevitable? Analysts Suggest It Is. Obtenido de Forbes: https://www.forbes.com/sites/joemckendrick/2016/07/05/is-all-cloud-computing-inevitable-analysts-suggest-it-is/?sh=71497bccebf0 | |
dc.relation.references | McWhinney, I. R. (1978). Medical Knowledge and the Rise of Technology. The Journal of Medicine and Philosophy, 3(4), 293-304. | |
dc.relation.references | Mello, M. M., & Wang, J. C. (2020). Ethics and governance for digital disease surveillance. Science, 368(6494), 951-954. doi:10.1126/science.abb9045 | |
dc.relation.references | Menéndez, E. (2020). Modelo médico hegemónico: tendencias posibles y tendencias más o menos imaginarias. Salud Colectiva, 16. doi:https://doi.org/10.18294/sc.2020.2615 | |
dc.relation.references | Miller, B., Blanks, W., & Yagi , B. (2023). The 510(k) Third Party Review Program: Promise and Potential. J Med Syst , 47(93). doi:https://doi.org/10.1007/s10916-023-01986-5 | |
dc.relation.references | Mims, C. (08 de 04 de 2013). Is Big Tech’s R&D Spending Actually Hurting Innovation in the U.S.? Obtenido de The Wall Street Journal: https://www.wsj.com/articles/is-big-techs-r-d-spending-actually-hurting-innovation-in-the-u-s-acfa004e | |
dc.relation.references | Mittelstadt, B. (2021). The impact of artificial intelligence on the doctor-patient relationship. Council of Europe. Obtenido de https://www.coe.int/en/web/bioethics/report-impact-of-ai-on-the-doctor-patient-relationship | |
dc.relation.references | Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philos. Technol, 33, 659–684. | |
dc.relation.references | Moreno Hernández, H. (2008). Profanación a la biopolítica: a propósito de Giorgio Agamben. Revista de Ciencias Sociales de la Universidad Iberoamericana, III(6). | |
dc.relation.references | Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs. | |
dc.relation.references | Morton , C., Smith , S., Lwin, T., George, M., & Williams, M. (2019). Computer Programming: Should Medical Students Be Learning It? JMIR Med Educ , 5(1). | |
dc.relation.references | Muehlematter, U., Bluethgen, C., & Vokinger, K. (2023). FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks. The Lancet Digital Health, 5(9). doi:https://doi.org/10.1016/S2589-7500(23)00126-7 | |
dc.relation.references | Münch, C., Marx , E., Benz, L., Hartmann, E., & Matzner, M. (2022). Capabilities of digital servitization: Evidence from the socio-technical systems theory. Technological Forecasting and Social Change, 176(121361). doi:https://doi.org/10.1016/j.techfore.2021.121361. | |
dc.relation.references | Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3, 869–877. doi:https://doi.org/10.1007/s43681-022-00209-w | |
dc.relation.references | Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J., . Gibson, J. (2021). Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics, 22(14). doi:https://doi.org/10.1186/s12910-021-00577-8 | |
dc.relation.references | Neves, M. P. (2007). Sentidos da vulnerabilidade; característica, condição, princípio. Revista Brasileira de Bioética, 2, 157-172. | |
dc.relation.references | Ng, A. (2019). IA para todos, by deeplearning.ai. Coursera. Obtenido de www.coursera.org | |
dc.relation.references | North Whitehead, A. (1978). Process and Reality: An Essay in Cosmology. The Free Press. | |
dc.relation.references | Nundy S, Montgomery T, & Wachter RM. (2019). Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. JAMA, 322(6), 497–498. doi:10.1001/jama.2018.20563 | |
dc.relation.references | Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the Future — Big Data, Machine Learning, and Clinical Medicine. New England Journal of Medicine, 375(13), 1216-1219. doi.org/10.1056/NEJMp1606181 | |
dc.relation.references | Olaronke, I., Oluwaseun, O., & Rhoda, I. (2017). State of the art: a study of human-robot interaction in healthcare. International Journal of Information Engineering and Electronic Business, 9(3), 43-55. doi:10.5815/ijieeb.2017.03.06 | |
dc.relation.references | Oosterlaken, I. (2014). Human Capabilities in Design for Values. En J. van den Hoven, P. Vermaas, & I. van de Poel, Handbook of Ethics, Values, and Technological Design (págs. 1–26). Springer. Obtenido de https://doi.org/10.1007/978-94-007-6994-6_7-1 | |
dc.relation.references | Ostherr, K. (2020). Artificial Intelligence and Medical Humanities. J Med Humanit, 43, 211–232. doi:https://doi.org/10.1007/s10912-020-09636-4 | |
dc.relation.references | Pellegrino, E. D., & Thomasma, D. C. (1993). The Virtues in Medical Practice. New York, Estados Unidos: Oxford University Press. | |
dc.relation.references | Pelluchon, C. (2013). Elementos para una ética de la vulnerabilidad. Los hombres, los animales, la naturaleza. Bogotá, Colombia: U. Javeriana; Universidad El Bosque. | |
dc.relation.references | Pelluchon, C. (2013). La autonomía quebrada: Bioética y filosofía. Bogotá, Colombia: Universidad El Bosque. | |
dc.relation.references | Pelluchon, C. (2015). Elementos para una ética de la vulnerabilidad. Los hombres, los animales, la naturaleza. Pontificia Universidad Javeriana. | |
dc.relation.references | Pelluchon, C. (2016). Taking Vulnerability Seriously: What Does It Change for Bioethics and Politics? En A. Masferrer, & E. García-Sánchez (Edits.), Human Dignity of the Vulnerable in the Age of Rights (Vol. 55). Obtenido de https://doi.org/10.1007/978-3-319-32693-1_13 | |
dc.relation.references | Perri, L. (23 de 08 de 2023). What’s New in the 2023 Gartner Hype Cycle for Emerging Technologies. Gartner. Obtenido de https://www.gartner.com/en/articles/what-s-new-in-the-2023-gartner-hype-cycle-for-emerging-technologies | |
dc.relation.references | Pharmaceutical-Technology. (09 de 03 de 2021). COVID-19 accelerated digital transformation of the pharma industry by five years: Poll. Obtenido de https://www.pharmaceutical-technology.com/news/covid-19-accelerated-digital-transformation-of-the-pharma-industry-by-five-years-poll | |
dc.relation.references | Pinto Bustamante, B., Riaño-Moreno, J., Clavijo Montaya, H., Cárdenas Galindo, M., & Campos Figueredo, W. (2023). Bioethics and artificial intelligence: between deliberation on values and rational choice theory. Front Robot AI, 10. doi:https://doi.org/10.3389/frobt.2023.1140901 | |
dc.relation.references | Ponce-Correa, A., Ospina-Ospina, A., & Correa-Gutierrez, R. (s.f.). Curriculum Analysis Of Ethics In Engineering: A Case Study. DYNA, 89(222), 67-73. doi:https://doi.org/10.15446/dyna.v89n222.101800 | |
dc.relation.references | Prabhu, S. P. (2019). Ethical challenges of machine learning and deep learning algorithms. The Lancet. Oncology, 20(5), 621–622. doi:https://doi.org/10.1016/S1470-2045(19)30230-X | |
dc.relation.references | Prainsack, B., & Forgó, N. (2022). Why paying individual people for their health data is a bad idea. Nature Medicine, 28, 1989–1991. doi:https://doi.org/10.1038/s41591-022-01955-4 | |
dc.relation.references | Prainsack, B., & Van Hoyweghen, I. (2020). Shifting Solidarities: Personalisation in Insurance and Medicine. Cham: Palgrave Macmillan. doi:https://doi.org/10.1007/978-3-030-44062-6_7 | |
dc.relation.references | Prainsack, B., El-Sayed, S., Forgó, N., Szoszkiewicz, Ł., & Baumer, P. (2022). Data solidarity: a blueprint for governing health futures. The Lancet Digital Health, 4(11), E773-E774. doi:https://doi.org/10.1016/S2589-7500(22)00189-3 | |
dc.relation.references | Prati, A., Shan, C., & Wang, K. I.-K. (2019). Sensors, vision and networks: From video surveillance to activity recognition and health monitoring. Journal of Ambient Intelligence and Smart Environments, 11(1), 5-22. doi:10.3233/AIS-180510 | |
dc.relation.references | Psychologs Magazine. (06 de 09 de 2021). Smart Watches And Mental Health. Obtenido de Psychologs, India's First Mental Health Magazine: https://www.psychologs.com/smartwatches-and-mental-health/ | |
dc.relation.references | Rackimuthu, S., Narain, K., Lal, A., Nawaz, F., Mohanan, P., Yasir Essar, M., & Ashworth, H. (2022). Redressing COVID-19 vaccine inequity amidst booster doses: charting a bold path for global health solidarity, together. Globalization and Health, 18(23). doi:https://doi.org/10.1186/s12992-022-00817-5 | |
dc.relation.references | Rampton, V. (2020). Artificial intelligence versus clinicians. BMJ (Clinical research ed.), 369, m1326. doi:10.1136/bmj.m1326 | |
dc.relation.references | Raphael, B. (1976). The Thinking Computer: Mind Inside Matter. San Francisco: Freeman and Company. | |
dc.relation.references | Reddy, H., Joshi, S., Joshi, A., & Wagh, V. (2022). A Critical Review of Global Digital Divide and the Role of Technology in Healthcare. Cureus, 14(9), e29739. doi:10.7759/cureus.29739 | |
dc.relation.references | Rezaev, A. V., Starikov, V. S., & Tregubova, N. D. (2018). Sociological Considerations on Human-Machine Interactions: from Artificial Intelligence to Artificial Sociality. Conferencia Internacional sobre Industria, Negocios y Ciencias Sociales (págs. 364-371). Tokio: Waseda University. | |
dc.relation.references | Ricoeur, P. (1996). Les trois niveaux du jugement médical. Esprit. | |
dc.relation.references | Ricoeur, P. (2010). Del texto a la acción. México, D.F.: Fondo de Cultura Económica. | |
dc.relation.references | Rostislavovna Schislyaeva, E., & Saychenko, O. (2022). Labor Market Soft Skills in the Context of Digitalization of the Economy. Social Sciences, 11(3). doi:10.3390/socsci11030091 | |
dc.relation.references | Rousseau, J.-J. (2003). Sobre las ciencias y las artes. Madrid: Alianza Editorial. | |
dc.relation.references | Rowley, J. (2007). The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33(2), 163–180. doi:10.1177/0165551506070706 | |
dc.relation.references | Ruíz, J., Cantú, G., Ávila, D., Gamboa, J. D., Juarez, L., de Hoyos, A., . . . Garduño, J. (2015). Revisión de modelos para el análisis de dilemas éticos. Boletín Médico del Hospital Infantil de México, 72(2), 89-98. doi:https://doi.org/10.1016/j.bmhimx.2015.03.006 | |
dc.relation.references | Sappleton, N., & Takruri-Rizk, H. (2008). The Gender Subtext of Science, Engineering, and Technology (SET) Organizations: A Review and Critique. Women's Studies, 37(3). doi:https://doi.org/10.1080/00497870801917242 | |
dc.relation.references | Savulescu, J. (2012). Moral Enhancement, Freedom, And The God Machine. The Monist, 95(3), 399-421. doi:10.5840/monist201295321 | |
dc.relation.references | Schaper, M., Wöhlke, S., & Schicktanz, S. (2019). “I would rather have it done by a doctor”: laypeople’s perceptions of direct-to-consumer genetic testing (DTC GT) and its ethical implications. Med Health Care and Philos, 22, 31-40. doi:https://doi.org/10.1007/s11019-018-9837-y | |
dc.relation.references | Schoenhagen, P., & Mehta, N. (2017). Big data, smart computer systems, and doctor-patient relationship. European heart journal, 38(7), 508–510. doi:https://doi.org/10.1093/eurheartj/ehw217 | |
dc.relation.references | Schwab, K. (26 de 02 de 2021). ‘This is bigger than just Timnit’: How Google tried to silence a critic and ignited a movement. Obtenido de Fast Company: https://www.fastcompany.com/90608471/timnit-gebru-google-ai-ethics-equitable-tech-movement | |
dc.relation.references | Semana. (25 de 11 de 2014). Así controlan las instituciones y empresas de salud a los médicos. Obtenido de https://www.semana.com/nacion/articulo/las-eps-controlan-los-medicos-con-polemicos-metodos/409528-3/ | |
dc.relation.references | Semana. (05 de 09 de 2020). Colombia, cada vez más rezagada en inteligencia artificial. Obtenido de https://www.semana.com/tecnologia/articulo/estados-unidos-y-china-los-primeros-en-inteligencia-artificial--noticias-hoy/701009 | |
dc.relation.references | Shew, A. (2020). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40-85. doi:10.1109/MTS.2020.2967492 | |
dc.relation.references | Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., . . . Corrado, G. S. (2023). Large language models encode clinical knowledge. Nature, 620, 172–180. doi:https://doi.org/10.1038/s41586-023-06291-2 | |
dc.relation.references | Sisk, B. A., & Baker, J. N. (2018). Microethics of Communication-Hidden Roles of Bias and Heuristics in the Words We Choose. JAMA Pediatrics, 172(12), 1115–1116. doi:https://doi.org/10.1001/jamapediatrics.2018.3111 | |
dc.relation.references | Sloterdijk, P. (2001). Normas sobre el parque humano. Una respuesta a la Carta sobre el humanismo de Heidegger. Madrid: Ediciones Siruela. | |
dc.relation.references | Sloterdijk, P. (2003). Esferas I. Burbujas. Microesferología. Ediciones Siruela. | |
dc.relation.references | Sloterdijk, P. (2012). Has de cambiar tu vida. Pre-Textos. | |
dc.relation.references | Smite, D., Brede Moe, N., Hildrum, J., Gonzalez-Huerta, J., & Mendez, D. (2023). Work-from-home is here to stay: Call for flexibility in post-pandemic work policies. Journal of Systems and Software, 195(111552). doi:https://doi.org/10.1016/j.jss.2022.111552 | |
dc.relation.references | Smits, M., Ludden, G., Peters, R., Bredie, S., van Goor, H., & Verbeek, P.-P. (2022). Values that Matter: A New Method to Design and Assess Moral Mediation of Technology. Design Issues, 38(1), 39-54. doi:https://doi.org/10.1162/desi_a_00669 | |
dc.relation.references | Solbakk, J. H. (2011). Vulnerabilidad: ¿un principio fútil o útil en la ética de la asistencia sanitaria? Revista Redbioética/UNESCO, 1(3), 89-101. | |
dc.relation.references | Srivastava, T., & Waghmare, L. (2020). Implications of Artificial Intelligence (AI) on Dynamics of Medical Education and Care: A Perspective. Journal of Clinical and Diagnostic Research, 14(3), JI01-JI02. doi:10.7860/JCDR/2020/43293.13565 | |
dc.relation.references | Srnicek, N. (2018). Capitalismo de plataformas. Caja Negra. | |
dc.relation.references | Stahl, B. C. (2021). AI Ecosystems for Human Flourishing: The Recommendations. En Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance, 91–115. doi:https://doi.org/10.1007/978-3-030-69978-9_7 | |
dc.relation.references | Starke, G., van den Brule, R., Elger, B., & Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics: Special Issue: Promises and Challenges of Medical AI, 36(2), 154-161. doi:https://doi.org/10.1111/bioe.12891 | |
dc.relation.references | Stempsey, W. E. (2006). Emerging Medical Technologies and Emerging Conceptions of Health. Theor Med Bioeth, 27, 227–243. doi:https://doi.org/10.1007/s11017-006-9003-z | |
dc.relation.references | Stiegler, B. (1994). La técnica y el tiempo I: el pecado de Epimeteo. Hondarribia: Argiraletxe Hiru. | |
dc.relation.references | Su, H., Lallo, A. D., Murphy, R. R., Taylor, R. H., Garibaldi, B. T., & Krieger, A. (2021). Physical human–robot interaction for clinical care in infectious environments. Nature Machine Intelligence, 3, 184-186. doi:https://doi.org/10.1038/s42256-021-00324-z | |
dc.relation.references | Susan, S. (2020). What COVID-19 Reveals About Twenty-First Century Capitalism: Adversity and Opportunity. Development, 63, 150–156. doi:https://doi.org/10.1057/s41301-020-00263-z | |
dc.relation.references | Takshi S. (2021). Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions. Journal of law and health, 34(2), 215–251 | |
dc.relation.references | Taylor, L. (2021). There Is an App for That: Technological Solutionism as COVID-19 Policy in the Global North. En E. Aarts, M. Fleuren, M. Sitskoorn, & T. Wilthagen, The New Common (págs. 209–215). Switzerland: Springer Nature. doi:10.1007/978-3-030-65355-2_30 | |
dc.relation.references | Teepe, G., Glase, E., & Reips, U.-D. (07 de 04 de 2023). Increasing digitalization is associated with anxiety and depression: A Google Ngram analysis. PLoS ONE. doi:https://doi.org/10.1371/journal.pone.0284091 | |
dc.relation.references | ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9 | |
dc.relation.references | ten Have, H. (2016). Global Bioethics: An introduction (1 ed.). New York, Estados Unidos: Routledge. | |
dc.relation.references | The Economist. (20 de 06 de 2022). Alphabet is spending billions to become a force in health care. Obtenido de https://www.economist.com/business/2022/06/20/alphabet-is-spending-billions-to-become-a-force-in-health-care | |
dc.relation.references | The Guardian. (08 de 03 de 2015). Silicon Valley is cool and powerful. But where are the women? Obtenido de https://www.theguardian.com/technology/2015/mar/08/sexism-silicon-valley-women | |
dc.relation.references | The New York Times. (23 de 07 de 2022). Google Fires Engineer Who Claims Its A.I. Is Conscious. Obtenido de https://www.nytimes.com/2022/07/23/technology/google-engineer-artificial-intelligence.html | |
dc.relation.references | Thomson, S., & Ip, E. C. (2020). COVID-19 emergency measures and the impending authoritarian pandemic. Journal of Law and the Biosciences, 7(1), lsaa064. doi:10.1093/jlb/lsaa064 | |
dc.relation.references | Tiku, N. (11 de 06 de 2022). The Google engineer who thinks the company’s AI has come to life. Obtenido de The Washington Post: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ | |
dc.relation.references | Timmermans, S., & Berg, M. (2003). The practice of medical technology. Sociology of Health & Illness, 25(3), 97-114. doi:10.1111/1467-9566.00342 | |
dc.relation.references | Toma, A., Senkaiahliyan, S., Lawler, P., Rubin, B., & Wang, B. (30 de 11 de 2023). Generative AI could revolutionize health care — but not if control is ceded to big tech. Obtenido de Nature: https://www.nature.com/articles/d41586-023-03803-y | |
dc.relation.references | Topol, E. (2014). The Patient Will See You Now: The Future of Medicine is in Your Hands. New York, Estados Unidos: Basic Books. | |
dc.relation.references | Topol, E. (2015). The Patient Will See You Now: The Future of Medicine is in Your Hands. J Clin Sleep Med, 11(6), 689–690. doi:doi: 10.5664/jcsm.4788 | |
dc.relation.references | Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books. | |
dc.relation.references | Topol, E. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. doi:10.1038/s41591-018-0300-7 | |
dc.relation.references | Tozzo, P., Angiola, F., Gabbin, A., Politi, C., & Caenazzo, L. (2021). The difficult role of Artificial Intelligence in Medical Liability: to err is not only human. La Clinica Terapeutica, 172(6), 527-528. doi:10.7417/CT.2021.2372 | |
dc.relation.references | Tran, B.-X., Thu Vu, G., Hai Ha, G., Vuong, Q.-H., Tung Ho, M., Vuong, T.-T., . . . M Ho, R. (2019). Global Evolution of Research in Artificial Intelligence in Health and Medicine: A Bibliometric Study. Journal of Clinical Medicine, 8(3), 360. doi:https://doi.org/10.3390/jcm8030360 | |
dc.relation.references | Trist, E. L., & Bamforth, K. W. (1951). Some Social and Psychological Consequences of the Longwall Method of Coal-Getting. Human Relations, 4(1), 3-38. doi:https://doi.org/10.1177/001872675100400101 | |
dc.relation.references | Truong, A. (2019). Are you ready to be diagnosed without a human doctor? A discussion about artificial intelligence, technology, and humanism in dermatology. Int J Womens Dermatol, 5(4), 267–268. doi:10.1016/j.ijwd.2019.05.001 | |
dc.relation.references | Tucker, G. (2015). Forming an ethical paradigm for morally sentient robots: Sentience is not necessary for evil. IEEE International Symposium on Technology and Society, 1-5. doi:10.1109/ISTAS.2015.7439420 | |
dc.relation.references | Beck, U. (1998). Politics of Risk Society. En J. Franklin (Ed.), The Politics of Risk Society. Polity Press. | |
dc.relation.references | Umbrello, S. (2022). The Role of Engineers in Harmonising Human Values for AI Systems Design. Journal of Responsible Technology, 10. doi:https://doi.org/10.1016/j.jrt.2022.100031 | |
dc.relation.references | UNESCO. (2005). Declaración Universal sobre Bioética y Derechos Humanos. Obtenido de UNESCO: http://portal.unesco.org/es/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html | |
dc.relation.references | UNESCO. (2015). Parte 1: Programa Temático. Programa de Educação em Ética. Obtenido de https://unesdoc.unesco.org/ark:/48223/pf0000163613_por | |
dc.relation.references | UNESCO. (16 de 01 de 2023). Data solidarity: why sharing is not always caring. Obtenido de https://en.unesco.org/inclusivepolicylab/analytics/data-solidarity-why-sharing-not-always-caring | |
dc.relation.references | Universidad Externado De Colombia. (13 de 04 de 2018). Colombia le apuesta a la implementación de la inteligencia artificial. Obtenido de https://www.uexternado.edu.co/derecho/colombia-le-apuesta-la-implementacion-de-la-inteligencia-artificial/#:~:text=Seg%C3%BAn%20explic%C3%B3%2C%20el%20Gobierno%20colombiano,ciento%20de%20las%20aplicaciones%20empresariales | |
dc.relation.references | University Of Denver. (s.f.). 18 Skills All Programmers Need to Have. Obtenido de https://bootcamp.du.edu/blog/programming-skills/ | |
dc.relation.references | Vakkuri, V., Kemell, K.-K., Jantunen, M., & Abrahamsson, P. (2020). “This is Just a Prototype”: How Ethics Are Ignored in Software Startup-Like Environments. En V. Stray, R. Hoda, M. Paasivaara, & P. Kruchten, Agile Processes in Software Engineering and Extreme Programming (Vol. 383, págs. 195–210). Cham: Springer. | |
dc.relation.references | van de Poel, I. (2021). Design for value change. Ethics Inf Technol, 23, 27–31. doi:https://doi.org/10.1007/s10676-018-9461-9 | |
dc.relation.references | Van Noorden, R., & Thompson, B. (2023). Audio long read: Medicine is plagued by untrustworthy clinical trials. How many studies are faked or flawed? Nature. doi:https://doi.org/10.1038/d41586-023-02627-0 | |
dc.relation.references | van Weert, J. (2020). Facing frailty by effective digital and patient-provider communication. Patient Educ Couns, 103(3), 433-435. doi:10.1016/j.pec.2020.02.020 | |
dc.relation.references | Vegas-Motta, E. (2020). Hermenéutica: un concepto, múltiples visiones. Revista Estudios Culturales, 13(25), 121-130. | |
dc.relation.references | Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Pennsylvania: Pennsylvania State University Press. | |
dc.relation.references | Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: The University of Chicago Press. | |
dc.relation.references | Verbeek, P.-P. (2020). Politicizing Postphenomenology. Philosophy of Engineering and Technology. doi:https://doi.org/10.1007/978-3-030-35967-6_9 | |
dc.relation.references | Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19–20. doi:10.1001/jama.2017.19198 | |
dc.relation.references | Vermeer, L., & Thomas , M. (2020). Pharmaceutical/high-tech alliances; transforming healthcare? Digitalization in the healthcare industry. Strategic Direction, 36(12), 43-46. doi:https://doi.org/10.1108/SD-06-2020-0113 | |
dc.relation.references | Viernes, F. (14 de 09 de 2021). Stop Saying ‘Data is the New Oil’. Obtenido de Medium: https://medium.com/geekculture/stop-saying-data-is-the-new-oil-a2422727218c | |
dc.relation.references | Vokinger, K. N., & Gasser, U. (2021). Regulating AI in medicine in the United States and Europe. Nature Machine Intelligence, 3(9), 738–739. doi:10.1038/s42256-021-00386-z | |
dc.relation.references | Vos, R., & Willems, D. L. (2000). Technology in medicine: ontology, epistemology, ethics and social philosophy at the crossroads. Theoretical Medicine and Bioethics, 21, 1–7. | |
dc.relation.references | Wang, J., Yang, J., Zhang, H., Lu, H., Skreta, M., Husić, M., . . . Brudno, M. (2022). PhenoPad: Building AI enabled note-taking interfaces for patient encounters. npj Digit. Med, 5(12). doi:https://doi.org/10.1038/s41746-021-00555-9 | |
dc.relation.references | Wang, X., & Luan, W. (2022). Research progress on digital health literacy of older adults: A scoping review. Frontiers in Public Health, 10. doi: https://doi.org/10.3389/fpubh.2022.906089 | |
dc.relation.references | Wartman, S. A. (2021). Medicine, Machines, and Medical Education. Academic medicine : journal of the Association of American Medical Colleges, 96(7), 947–950. doi: https://doi.org/10.1097/ACM.0000000000004113 | |
dc.relation.references | Webster, P. (13 de 04 de 2023). Big tech companies invest billions in health research. Obtenido de Nature Medicine: https://www.nature.com/articles/s41591-023-02290-y | |
dc.relation.references | Webster, P. (2023). Big tech companies invest billions in health research. Nature Medicine, 29, 1034–1037. doi:https://doi.org/10.1038/s41591-023-02290-y | |
dc.relation.references | Weiner, M., & Biondich, P. (2006). The influence of information technology on patient-physician relationships. Journal of General Internal Medicine, 21(Suppl 1), S35-9. doi:10.1111/j.1525-1497.2006.00307.x | |
dc.relation.references | Weinstein, J. (2019). Artificial Intelligence: Have You Met Your New Friends; Siri, Cortana, Alexa, Dot, Spot, and Puck. Spine, 44(1), 1-4. doi:https://doi.org/10.1097/BRS.0000000000002913 | |
dc.relation.references | Wenk, H. (2020). Kommunikation in Zeiten künstlicher Intelligenz [Communication in the age of artificial intelligence]. Gefässchirurgie, 25. doi:10.1007/s00772-020-00644-1 | |
dc.relation.references | Whitelaw, S., Mamas, M. A., Topol, E., & Spall, H. (2020). Applications of digital technology in COVID-19 pandemic planning and response. The Lancet Digital Health, 2(8), e435-e440. doi:10.1016/S2589-7500(20)30142-4 | |
dc.relation.references | WHO. (09 de 09 de 2020). Tracking COVID-19: Contact Tracing in the Digital Age. Obtenido de World Health Organization: https://www.who.int/news-room/feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age#:~:text=contact%20tracing%20is%20the%20process,the%20last%20two%20weeks | |
dc.relation.references | WHO. (2021). Ethics and governance of artificial intelligence for health. Obtenido de World Health Organization: https://www.who.int/publications/i/item/9789240029200 | |
dc.relation.references | WHO. (28 de 06 de 2021). Ética y gobernanza de la inteligencia artificial para la salud. Obtenido de https://www.who.int/publications/i/item/9789240029200 | |
dc.relation.references | Wu, E., Wu, K., Daneshjou, R., Ouyang, D., Ho, D. E., & Zou, J. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27, 582-584. doi:https://doi.org/10.1038/s41591-021-01312-x | |
dc.relation.references | Xolocotzi Yáñez, Á. (2020). La verdad del cuerpo. Heidegger y la ambigüedad de lo corporal. Universidad de Antioquia. doi:https://doi.org/10.17533/udea.ef.n61a09 | |
dc.relation.references | Xu, W., & Ouyang, F. (2022). The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 9(59). doi:https://doi.org/10.1186/s40594-022-00377-5 | |
dc.relation.references | Yee, V., Bajaj, S., & Cody Stanford, F. (2022). Paradox of telemedicine: building or neglecting trust and equity. The Lancet Digital Health, 4(7), E480-E481. doi:https://doi.org/10.1016/S2589-7500(22)00100-5 | |
dc.relation.references | Zhang, P. (2010). Advanced Industrial Control Technology. William Andrew Publishing. | |
dc.relation.references | Zwart, H. (2008). Challenges of Macro-ethics: Bioethics and the Transformation of Knowledge Production. Bioethical Inquiry, 5, 283–293. doi:https://doi.org/10.1007/s11673-008-9110-9 | |
dc.rights.accessrights | info:eu-repo/semantics/closedAccess | |
dc.rights.accessrights | http://purl.org/coar/access_right/c_14cb | |
dc.rights.local | Acceso cerrado | spa |
dc.subject | Inteligencia Artificial | |
dc.subject | Relación médico-paciente | |
dc.subject | Salud Digital | |
dc.subject | Interacción humano-máquina | |
dc.subject | Vulnerabilidad | |
dc.subject | Bioética | |
dc.subject | Ética médica | |
dc.subject.keywords | Artificial Intelligence | |
dc.subject.keywords | Doctor-Patient Relationship | |
dc.subject.keywords | Digital Health | |
dc.subject.keywords | Human-Machine Interaction | |
dc.subject.keywords | Vulnerability | |
dc.subject.keywords | Bioethics | |
dc.subject.keywords | Medical Ethics | |
dc.subject.nlm | WB60 | |
dc.title | La vulnerabilidad antropotécnica: una propuesta para la interpretación de las transformaciones de la relación médico paciente en la era de la inteligencia artificial | |
dc.title.translated | The anthropotechnical vulnerability: a proposal for interpreting the transformations of the doctor-patient relationship in the era of artificial intelligence | |
dc.type.coar | https://purl.org/coar/resource_type/c_db06 | |
dc.type.coarversion | https://purl.org/coar/version/c_ab4af688f83e57aa | |
dc.type.driver | info:eu-repo/semantics/doctoralThesis | |
dc.type.hasversion | info:eu-repo/semantics/acceptedVersion | |
dc.type.local | Tesis/Trabajo de grado - Monografía - Doctorado | spa |
Archivos
Bloque original
- Nombre: Trabajo de Grado.pdf
- Tamaño: 8.29 MB
- Formato: Adobe Portable Document Format