Word polysemy is a semantic phenomenon that crosses linguistic barriers, as verified in several instances across human languages. Among natural communication systems, human language is understood to be built on multiple combinatorial and compositional settings, which enable the rearrangement of open-ended sound-based elements into close-ended linguistic structures, such as morphemes and words. This work is expected to corroborate future research on locally grounded ethical AI approaches designed within a specific culture's values, in order to mitigate and avoid social vulnerability and violence. The effort toward a global, universal, and unilateral influence of Western values on AI ethical regulation is counteracted with a reflection on decentralized, bottom-up approaches to culture, using applied ethnographic research to bring the potential of local culture into AI policy making.

Instances of recent technology-based discrimination and violence, such as misogyny, religious intolerance, racism, xenophobia, and transphobia, are presented and examined through the lens of current AI development. Theoretical reflections on post-modern panoptic frameworks, such as synoptic and banoptic devices, are carried out to assess the impact of emerging surveillance technologies as social control strategies that reinforce the marginalization of excluded categories.

The issue of ethical AI regulation is therefore grounded in questioning to what extent Western values and practices remain consistent when social and ethical policies are deployed in a standardized, global way to cultures that may hold distinct cultural perceptions and values. Incapable of broadly embracing all cultural and social developments throughout history, such norms amount to social regulation and standardization that become exclusionary toward individuals whose identities do not conform to those moral standards.
Currently, AI technologies are tooled to perform security-based social regulation, potentially deploying gathered data as threats against social categories that deviate from moral norms. Before late modernity, disciplinary discursive power was the tool used to perform social control in Western societies by institutions such as the Roman Catholic Church. This paper's objective is to address the use of AI and Big Data as social surveillance tools for establishing more sophisticated strategies of social control. From an eventual possibility to an invisible probability, Big Data may be used for anything from devising purchase-preference profiles to political bias in election periods, and to reinforce bigotry against social minorities, especially transphobia.

Emerging applications of Artificial Intelligence (AI) tools enable an intensified and optimized collection of personal data, whether objective data granted by individuals themselves or subjective data silently inferred by algorithmic learning. In late-modernity societies, the double meaning of 'monitoring' is no coincidence, insofar as it can be either a subject or an object attribution, thus suggesting a phenomenological intersection between surveillance studies and technological deployments in mass media.