ncy intuitions, because it is better to misattribute the rustle behind a bush to a lion than to believe it was the wind when in fact there is a predator behind the bush that can eat you. This mechanism is understood to play an important role in the anthropomorphization of objects and artifacts such as robots [33].

Drawing on cognitive science and its multidisciplinary approach, which includes philosophy, to understand the human ability to attribute mental states (social cognition), we must develop the methods needed to determine which characteristics of the artifacts that interact with us in different environments (work, educational, domestic, etc.) matter. These characteristics are the ones that evolution has fixed as relevant cues for social communication, such as gaze direction, saccades, head–eye coordination, facial expressions, non-verbal behavior, gestures, and even voice.

To trust artificial systems, including robots, we must not only design them as intentional systems and exploit our natural tendency in social cognition to attribute intentions, but also consider their appearance and characteristics as relevant cues to enable social communication [1].

Appearance, the process of anthropomorphization, is key. But we cannot forget the "uncanny valley" phenomenon: if artificial systems turn out to be excessively similar, but not identical, to us, they can cause rejection and repulsion. However, research in social robotics by Ayanna Howard and colleagues has found that children change their behavior to please and satisfy a robot when it disagrees with them. These results have interesting ethical implications [31]. Through a series of experiments, they found that children take time to question a robot's authority.

However, I am not sure that deference to authority is the same as trust. To achieve a natural disposition or inclination to trust artificial systems, anthropomorphic traits must be implemented.

From these features—gaze direction, saccades, head–eye coordination, facial expressions, non-verbal behavior, gestures, voice, gender—and their progressive implementation in robotic systems, I am confident that we will become more comfortable interacting with artificial systems that will progressively share more and more space with us in multiple contexts.

Conclusion

The purpose of this study has been to understand how the general public perceives artificial systems and, in particular, how they attribute mind and free will. To this end, I developed a scale measuring a series of mental attributes, to see whether people consider them essential for something to count as a "mind"; two tests of mind perception (for humans and for machines); and, finally, two tests of free will attribution (for humans and for machines). In both the mind perception test and the free will attribution test, participants judged that artificial systems have neither mind nor free will.
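To make the scoring procedure concrete, the following is a minimal sketch of how per-participant attribution scores for humans versus machines could be computed and compared. The item names, the 1–7 rating range, and the use of a Wilcoxon signed-rank test are illustrative assumptions, not the study's actual instrument or analysis.

```python
# Sketch: score a mind-attribution scale and compare ratings of humans
# vs. machines. Items, scale range, and test choice are assumptions.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant item ratings (1 = not at all, 7 = fully),
# one row per participant, one column per mental attribute
# (e.g., consciousness, intention, emotion, free will).
human_ratings = np.array([
    [7, 6, 7, 6],
    [6, 7, 6, 7],
    [7, 7, 6, 6],
    [6, 6, 7, 7],
    [7, 6, 6, 7],
])
machine_ratings = np.array([
    [2, 3, 1, 1],
    [1, 2, 2, 1],
    [3, 2, 1, 2],
    [2, 1, 1, 1],
    [1, 3, 2, 2],
])

# Aggregate each participant's item ratings into one attribution score
# per target (human, machine).
human_scores = human_ratings.mean(axis=1)
machine_scores = machine_ratings.mean(axis=1)

# Paired, non-parametric comparison of the two targets, a common choice
# for small samples of Likert-type data.
stat, p_value = wilcoxon(human_scores, machine_scores)
print(f"Wilcoxon W = {stat:.2f}, p = {p_value:.4f}")
```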

However, I must acknowledge the limitations that come with the use of a convenience sample. Because the sample was not selected through random sampling, it may not fully represent the broader population, which may introduce selection bias and limit the generalizability of the findings. The study also involved a relatively small sample of only 25 subjects, which further constrains the robustness of my statistical inferences.

Therefore, while the findings of my study offer insights into the topic at hand, they should be interpreted with caution due to the potential biases inherent in the use of a convenience sample. To better confirm and extend the applicability of my findings, future studies should employ a more rigorous sampling method and, preferably, a larger and more diverse sample. It is my hope that future studies will take up my tests and conduct further evaluations to confirm the robustness and validity of my findings.