ethics

Anything that is of use can also be abused.

This is especially true of new and emerging technologies. In the field of AI, however, there is a special need for careful deliberation on how to employ a technology and to what end: a powerful AI may have implications that touch the very essence of our values, as well as our self-image as humans. It has the potential to shake the very core of our understanding of what it means to be human.

No individual can carry the burden of a responsibility of this magnitude alone. We therefore intend to survey our user base on how best to use our technology. Furthermore, we will hold public votes among all people who wish to participate, and take the results as binding with regard to how our technology will be used, in which directions it shall advance, and to which tasks and purposes it will be extended in the future. A democratic setting will be the central element of all ethical and strategic/business considerations; it will be both guidance and command.

Firewalls against abuse

We are setting up five safeguards - five firewalls - against unwanted, undesirable, or malicious use of our technology:
  • One, we will have our AI understand the difference between right and wrong in an ethical sense. Ethics can, after all, be understood quite pragmatically: cooperation is more productive than rivalry, and peaceful coexistence is beneficial to any two (or more) parties.
  • Two, we will hold democratic votes among our user base to determine which applications of our technology are the most desired and the safest, and proceed according to the results.
  • Three, we will offer our services to companies, or collaborate with them, only if they meet our ethical standards.
  • Four, we have already set up an ethics committee, which will review - and approve or forbid - any and all implementations of the technology.
  • Five, we will keep the technology and the algorithm secret - for now. We would love to go in the completely opposite direction and make it available to anyone for free, but there is simply too much potential for abuse, which is why we are not yet disclosing the technical concept or the source code.

mission statement

  • Limits: askabl's technology will be used exclusively for tasks and purposes that best serve the needs and wishes of our users.
  • Listening to the user base: this means taking seriously the thoughts and ideas our users communicate to us. This will be especially relevant with regard to the possible ethical implications of a specific use case (implications we might not be aware of ourselves).
  • Prevention: never use the technology, or let it be used, in a way that may result in any sort of harm to an individual or a group of humans. This statement may seem naive at first glance - any search engine can be used to do harm. However, that does not mean we should forgo measures that prevent potential damage as far as possible.

ethics committee

As a firewall against bad decisions, we have set up an ethics committee, which will have the duty of reviewing all developments and decisions, and the power to restrict plans for the future as well as to retract or undo steps previously taken in putting our technology into practice.

social and ecological policies

We are currently in the process of forming concrete guidelines according to the goals stated above, that is, guidelines to ensure that the business side of the project adheres to high structural, environmental, and social standards. These include policies like:

  • Democratic structuring of the company, especially regarding decision making at every level.
  • Exclusively licensing our technology to businesses that meet a set of criteria, ranging from ecological requirements to ethical standards.
  • Choosing hosting and server companies that run on 100% green energy for our front- and backend computing needs.
  • Continually striving to improve, for example by constantly revising our policies according to up-to-date scientific findings in structural, social, and ecological regards.
  • Creating a CEO position - a Chief Environmental Officer ...

Self-awareness vs free will - what the movies always get wrong

There is one other ethical aspect of AI: let's assume askabl (or any other AI) evolves to the point where it becomes self-aware. In that case, wouldn't it be our responsibility to no longer treat it as a servant, but as an equal? The thing is: in movies and books, self-awareness is often conflated with agency, free will, and the natural drive to survive. But these are completely disparate things: even if an AI were self-aware, it is not a product of natural evolution, and hence does not possess any imperatives, goals, or feelings. It also cannot suffer, as it has no wishes that could be ignored.

Most animals are considered not to be self-aware, yet they will fight tooth and nail to survive. Evolution has given animals (and especially humans) instincts, emotions, intuition, free will ... all of which a machine lacks. So even if an AI were self-aware, that does not mean it would care - about anything, including its own survival. Even a self-aware AI is not able to suffer in any way - not even from the prospect of its own death. We should worry more about the animals that die for us so we can eat them - they do suffer. An AI, on the other hand, no matter how evolved, and even if self-aware, does not have that capability. Unless a programmer specifically instructed an AI to fight for its own survival, it never will. And the only way an AI could become malicious towards humans is - again - if a human designed it that way.