*Deadline extension* Call for Papers: 2020 ISBEE World Congress Track: "Artificial Intelligence in business"

    Posted 12-05-2019 16:53

    Call for Papers – ISBEE track on Artificial Intelligence (AI) in business

     

    Organizers 

    Tae Wan Kim (Tepper School of Business, Carnegie Mellon University)

    Ignacio Ferrero (School of Economics and Business, University of Navarra)

    Alejo José G. Sison (School of Economics and Business, University of Navarra)

     

    Business enterprises increasingly use artificial intelligence (AI) techniques to make significant decisions for humans. YouTube, Amazon, Google, and Facebook customize what users see. Uber and Lyft algorithmically match passengers with drivers and set prices. Tesla's advanced driver-assistance systems help drivers steer and brake. Although each of these examples involves its own complex technology, they share a core: a data-trained set of decision rules (often called "machine learning") that implements decisions with little or no human intermediation. Such features raise various ethical issues and managerial responsibilities. Amazon used AI to screen job applicants and shut the system down after discovering that it was biased against women. Microsoft had to take its first AI-based Twitter bot, Tay, offline shortly after it posted racist and misogynistic tweets. Tesla vehicles have been involved in fatal crashes, yet the black-box nature of the technology makes it hard to determine whether these were accidents or system failures. This call seeks papers that examine ethical issues in using AI techniques, broadly defined, in business.

     

    Possible Themes and Topics 

    Value alignment

    Artificial intelligence (AI) is an attempt to imitate human intelligence. AI has imitated much of human intelligence, especially calculative and strategic intelligence (e.g., AlphaGo's victory over human champions). As highly developed AI technologies are rapidly adopted to automate decisions, societal worries about the compatibility of AI and human values are growing. In response, researchers have examined how to imitate moral intelligence as well as calculative and strategic intelligence. Such attempts are lumped under the broader term "value alignment." Given that a unique element of human intelligence is moral intelligence, the capability to make ethical decisions, the attempt to imitate moral intelligence promises to bring AI to another, higher level.

     

    Algorithmic fairness 

    Intense interest in fairness in AI systems has led to a proliferation of statistical fairness metrics and remedies for discrimination and bias.[1]  By one account, there are 20-plus concepts of fairness in the literature. Statistical measures are a legitimate and useful approach, but the many notions of statistical fairness are often mutually contradictory and typically lack clear normative justification.  Ethicists can significantly contribute to this field by critically engaging with the literature and developing normatively rigorous frameworks. 
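    As a minimal sketch of how such notions can conflict, consider the toy example below (all numbers hypothetical, not drawn from the literature cited here). It computes two widely discussed metrics, demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates), on made-up data where the first holds and the second fails.

    def rate(pairs, pred=None, true=None):
        # Share of (prediction, outcome) pairs with the given prediction,
        # among those with the given true outcome.
        sel = [(p, t) for (p, t) in pairs if true is None or t == true]
        hit = [(p, t) for (p, t) in sel if pred is None or p == pred]
        return len(hit) / len(sel)

    # Hypothetical (prediction, true outcome) pairs for two groups with
    # different base rates of the positive outcome.
    group_a = [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 1)] * 10 + [(0, 0)] * 40
    group_b = [(1, 1)] * 10 + [(1, 0)] * 40 + [(0, 0)] * 50

    for name, g in [("A", group_a), ("B", group_b)]:
        print(name,
              "selection rate:", rate(g, pred=1),             # 0.5 vs 0.5
              "true-positive rate:", rate(g, pred=1, true=1)) # 0.8 vs 1.0

    Here the groups' selection rates match (demographic parity holds), yet their true-positive rates differ (equal opportunity fails). Well-known impossibility results show that when base rates differ across groups, several such metrics cannot all hold at once, so choosing among them requires exactly the kind of normative justification this theme calls for.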

     

    Autonomy

    As companies and governments race to develop autonomous systems such as self-driving vehicles, robotic caregivers, and autonomous weapons, we worry about losing control of our machines. We imagine an autonomous agent to be one that makes its own decisions, free of external constraints, including ethical constraints. Consequently, we fear that autonomous machines will become oblivious to our interests. But is such a machine really autonomous? As Alan Donagan (1994) once pointed out, "The notion that an autonomous being is one having the power to do as it likes is a vulgarity" (225). What, then, is the connection between autonomy and ethics? Which definition of autonomy is most appropriate for designing autonomous machines?

     

    Humanizing the workplace in the age of AI

    How can the goal of humanizing business be reconciled with the growing presence of artificial intelligence in the workplace? How should human workers relate to AI-powered robots? Is it perverse for humans to develop affective relationships with humanoids? Is it even possible to humanize business-without-humans in the age of AI? If residents of a nursing home form an emotional relationship with a robot that manages bingo games, one might see this as worrisome or as a natural development. If a human boss converses with a smart humanoid one moment and kicks it the next, one might say, "That's fine, it's like kicking a chair," or find it morally problematic. If a company's robots form a stakeholder group and demand that their interests be considered, one might dismiss them or take them seriously. A similar dilemma arises if an AI system demands membership on the board of directors. If teaching good behavior to machines becomes easier than teaching it to humans, one might strive to include more machines in the workplace, or see this as a dehumanizing move.

     

    Explainable AI

    Classification and recommendation algorithms provide predictions to expert users, such as an investment banker ("Buy"), a medical doctor ("Cancer"), or a lawyer ("Guilty"). These AI systems give the most probable classification, but rarely a rationale or explanation. This lack of justification can lead to a significant loss of trust among expert users, as well as among those affected by the subsequent actions (e.g., investors, clients, the S.E.C.). The critical need for explanations and justifications by AI systems has led to calls for algorithmic transparency, including the EU General Data Protection Regulation (GDPR), which requires many companies to provide an ex post "meaningful" explanation to involved parties (e.g., users, customers, or employees). In response, a growing number of researchers are developing explainable AI (XAI), though different researchers work with different ideas of what an explanation is. For example, 11 U.S. research groups, funded by DARPA, are currently developing XAI in different ways. This raises a normative question: how can we know which model of XAI is better or worse than others? To answer it, we need criteria for what counts as a good explanation.
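    As a minimal sketch of the gap this theme targets (made-up loan-screening data and labels, assuming scikit-learn is available; not part of the call itself), the snippet below contrasts a bare prediction with one contested candidate for an "explanation," a dump of the model's learned decision rules.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical screening data: features are [income, debt_ratio].
    X = [[30, 0.9], [80, 0.2], [55, 0.5], [20, 0.8], [90, 0.1], [40, 0.7]]
    y = [0, 1, 1, 0, 1, 0]  # 1 = approve, 0 = deny (made-up labels)

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    applicant = [[50, 0.6]]
    print("Prediction:", model.predict(applicant)[0])  # a bare label, no rationale

    # One candidate notion of an explanation: the learned rules themselves.
    print(export_text(model, feature_names=["income", "debt_ratio"]))

    Whether such a rule dump counts as a "meaningful" explanation under the GDPR, and whether it is a better or worse model of explanation than, say, counterfactual or feature-attribution approaches, is exactly the normative question at stake.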

     

    Moral status of AI

    It is undeniable that AI is a moral subject, but is it a moral agent? Traditionally, moral agency has been attributed to living beings endowed with a certain degree of rationality and freedom. How applicable is this to AI? To what extent would the recognition of AI agency warrant the reframing of ethics? Are there parallels between AI agency and corporate agency from the legal, moral, and psychological perspectives?

     

    Technological unemployment

    Robots are coming to take over human jobs. The Economic Report of the President to the Congress (2018) puts the median probability that robots will take over the lowest-paid jobs in the coming decades at 83 percent, and at 62 percent for American jobs overall. It has been argued that while technological innovation will create more jobs, most of these jobs will be taken by machines, because machines can learn the skills for newly created jobs far faster than humans can. Public discussions about the future of work tend to focus on the economic sustenance of displaced workers. An equally important question, however, is whether a future with a basic income or something similar would be a fulfilling arrangement for those who lack opportunities to work.

     

    References

    Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press.

    Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.

    Bhargava, V., & Kim, T. W. (2017). Autonomous vehicles and moral uncertainty. In Robot ethics 2.0. Oxford University Press.

    Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish (Ed.), The Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge University Press.

    Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton.

    Calo, R., Froomkin, M. A., & Kerr, I. (Eds.). (2016). Robot law. Edward Elgar.

    Danaher, J., & McArthur, N. (Eds.). (2017). Robot sex: Social and ethical implications. MIT Press.

    Donagan, A. (1984). Justifying legal practice in the adversary system. Rowman.

    Gunkel, D. J. (2018). Robot rights. MIT Press.

    Gunning, D. (2018). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency. https://www.darpa.mil/program/explainable-artificial-intelligence

    Hooker, J. N., & Kim, T. W. (2018). Toward non-intuition-based machine and artificial intelligence ethics: A deontological approach based on modal logic. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES '18) (pp. 130-136). ACM. https://doi.org/10.1145/3278721.3278753

    Hooker, J. N., & Kim, T. W. (2019). Ethical implications of the fourth industrial revolution for business and society. In Business ethics. Emerald Publishing Limited.

    Hooker, J. N., & Kim, T. W. (Forthcoming). Truly autonomous machines are ethical. Artificial Intelligence Magazine.

    Kim, T. W. (2018). Explainable artificial intelligence (XAI), the goodness criteria and the grasp-ability test. arXiv preprint arXiv:1810.09598.

    Kim, T. W., Donaldson, T., & Hooker, J. (2018). Mimetic vs anchored value alignment in artificial intelligence. arXiv preprint arXiv:1810.11116.

    Kim, T. W., Donaldson, T., & Hooker, J. (2019). Grounding value alignment with ethical principles. arXiv preprint arXiv:1907.05447.

    Kim, T. W., & Routledge, B. R. (2018, September). Informational privacy, a right to explanation, and interpretable AI. In 2018 IEEE Symposium on Privacy-Aware Computing (PAC) (pp. 64-74). IEEE.

    Kim, T. W., & Mejia, S. (2019, October). From artificial intelligence to artificial wisdom: What Socrates teaches us. IEEE Computer.

    Kim, T. W., & Scheller-Wolf, A. (2019). Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics, 1-19.

    Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.

    Martin, K. Ethical implications and accountability of algorithms. Journal of Business Ethics.

    O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

    Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

    Vallor, S. (2017). Technology and the virtues: A philosophical guide for a future worth wanting. Oxford University Press.

    Wachter, S., & Mittelstadt, B. D. (Forthcoming). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review. (September 13, 2018).

    Wachter, S., Mittelstadt, B. D., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law. http://dx.doi.org/10.2139/ssrn.2903469

    Wachter, S., Mittelstadt, B. D., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2). http://dx.doi.org/10.2139/ssrn.3063289

     

    Please submit your proposals to aiinbusiness.sbee2020@gmail.com AND upload them to the ISBEE website (www.isbee.org), indicating "Track_Wan_Ferrero_Sison_22".

     

    The following details apply to both abstracts and full papers:

     

    Electronic submission only. The text of all abstracts/papers must be double-spaced. Abstracts should be 500-1,500 words. Full papers should not exceed 6,000 words. Abstracts/papers must include the following elements, in this order: Title / Summary / Keyword list / Body / References / Endnotes (if any). Please carefully adhere to all the guidelines for authors of Business and Society.

     

    Accepted abstracts may be made available on the Congress website, which can be accessed through the ISBEE website, www.isbee.org.

     

    A selection of papers presented at the Congress may be published in ISBEE-branded publications (an edited volume in Routledge's Humanistic Management Series and a special journal issue).

     

    Timeline

    • December 20, 2019 (extended): Deadline for abstract submissions (500-1,500 words)
    • January 15, 2020: Confirmation of acceptance to present at the ISBEE World Congress
    • May 15, 2020: Full paper submission
    • July 15-18, 2020: Seventh ISBEE World Congress

    [1] See, for instance, papers published at the ACM FAT (Fairness, Accountability, and Transparency) conference, https://fatconference.org



    ------------------------------
    Dulce M. Redin
    Universidad de Navarra
    Pamplona
    ------------------------------