As a firewall against bad decisions, we have set up an ethics committee, which will have the duty of reviewing all developments and decisions, and the power to restrict future plans as well as to retract or undo steps previously taken in putting our technology into practice.
We are currently in the process of forming concrete guidelines according to the goals stated above, that is, aiming to ensure that the business side of the project adheres to high structural, environmental, and social standards. That includes policies like:
There is one other ethical aspect of AI: let's assume askabl (or any other AI) evolves to the point where it becomes self-aware. In that case, wouldn't it be our responsibility to no longer treat it as a servant, but as an equal? The thing is: in movies and books, self-awareness is often confused with agency, free will, and the natural drive to survive. But self-awareness and these traits are completely disparate things: even a self-aware AI is not a product of natural evolution, and hence does not possess any imperatives, goals, or feelings. Nor does it suffer, as it has no wishes that could be ignored.
Most animals are considered not to be self-aware, yet will fight tooth and nail to survive. Evolution has given animals (and especially humans) instincts, emotions, intuition, free will — all of which a machine lacks. So even if an AI were self-aware, that does not mean it would care about anything, including its own survival. Even a self-aware AI cannot suffer in any way, not even from the prospect of its own death. We should worry more about the animals that die so we can eat them — they do suffer. An AI, on the other hand, no matter how evolved, and even if self-aware, does not have that capability. Unless a programmer specifically instructed an AI to fight for its own survival, it will never do so. And the only way an AI could become malicious towards humans is — again — if a human designed it that way.