Good points all! Just on that last post, it made me think that a lot of the assumptions people make about humans and human consciousness/intelligence are quite wrong. In fact, the consistency and cohesiveness of thought and behaviour we like to assume we each have is quite robotic in nature! (Perhaps our consciousness, thought of in that quite linear/structured/mechanical way, is a fair definition of machine consciousness after all. Or maybe it's because all the articles were written by men?)
What I see of humanity is a (sometimes willing, sometimes reluctant) fluidity of self. Our thoughts and feelings about things change constantly, in big and small ways. It's not just about changing in reaction to stimuli or situation, either. We have whims. They are neurobiological, probably, but still: they are semi-random, even, or driven by personality (i.e. unpredictable, or predictable only within a set of observed but undefined tendencies). Often, they are illogical. (And sexy. Take note, sex robots.)
To bring that back to practical ways of how we might define and legislate for how we treat animals or robots – and to draw on @wow-oh-wow – we very much do combine different ways of thinking about animals. Most of us probably understand that on a creature (similarity) level, it's cruel and unjust to enslave and eat animals. Some of us do anyway – like we often do things we conceive as 'morally' wrong – because we have inherent drives and physical/genetic desires, but also because we have the aforementioned dominion. We often understand that dominion as an expression of good fortune or our social ability to dominate. (Some of us consider that 'natural'.)
And that goes back to social values and behaviours as expressed in one of the articles (about people being instructed to beat up cute robots). I don't really care for the discussion about our sympathetic responses to human-like reactions to 'suffering' etc – of course we react that way. But I think our laws will always be inconsistent (as per the other essay's mention of laws in NY State re animal treatment depending on their use/proximity to us). It's naïve to think we'll reform human society to create consistency (although maybe the robots will do that for us).
While we may come to regard them as superior (as they already are in so many ways), I don't think we'll ever treat robots as equal. Which I think means we mostly lean toward 'similarity' when assessing qualities of consciousness. For one thing, robots are likely to be highly superior to us physically, and in many mental/calculative/retentive functions. For another, we create them. Even if they improve and replicate themselves, we created the conditions and the ability.
But, while we might observe a linguistic semblance of consciousness, I don't think we'll see real depth of consciousness (and, heck, suffering) built into robots (or machine-learned into them) for some time. (As Jeff Sparrow said, we'll struggle to comprehend them because we struggle to comprehend nature and ourselves and each other.) So I think the urgency of legislating rests on how they are allowed to be programmed to regard us, rather than vice versa.
And that, too, will absolve us of any sense that we have to fully grapple with our own chaotic bodies, minds and selves.
—
Sorry for the long rant. And: I have no idea what the hell phenomenology means and Wikipedia is useless so please don't torture me with it agh!