
More Art Than Science ...


[Image: Sonny from I, Robot, drawing]

"This is my dream. You're right, detective, I cannot create a great work of art. This is the place where robots meet. Look, you can see them here as slaves to logic ... and this man on the hill comes to free them." ~ iRobot.


Whenever I get a Twitter notification saying someone followed me, I always like to hover over their icon or even check out their profile page just to get a sense of the type of people who follow me and why. Sometimes, the results really surprise me. I find people whose expressed ideologies are radically different from my own, and yet they chose to follow me all the same. That suggests my tactic of concealing my identity is working. Though that could just be confirmation bias.


Reading people's bios, I sometimes wonder what their motivations are: why they chose to say certain things about themselves while concealing others, and what it was that prompted them to hit that follow button. Are they following me solely to promote themselves, or did they like something I said? Perhaps both in some cases? If I had enough data of that kind, I could, in theory, produce more and better content after performing a factor analysis and running some A/B testing.


Sadly, because I'm not a mind reader, I can't penetrate their little black boxes any more than they can see through mine; and while that's probably for the better, it also comes with major downsides.


For now, I'm stuck with guessing and soliciting more feedback.


Probably because my brand is dark transhumanism and I post about AI from time to time, I've been seeing an uptick in followers grouped under the umbrella of what could loosely be categorized as mind and cognitive scientists: people working in the fields of biological and artificial neuroscience, of psychology and AI. Again I find myself wondering what I said that caught their attention.


It's both fascinating and humbling because these people probably know way more than me about the subject of transhumanism.


One such person retweeted an interesting article on the subject of Deep Learning and the black box surrounding the so-called motivations of AIs in making decisions. There are obvious parallels between the ways in which we humans learn and the ways in which scientists have taught machines to learn; and in both cases, we're dealing with a student we don't fully understand. The problem of ethics, for instance, is something we haven't quite figured out ourselves - at least not beyond some broad brush strokes - and yet we'll soon be tasked with programming robots to take this most complex, nuanced of topics into consideration in order to hedge our own survival.

Maybe once the AIs can learn from books and YouTube videos, we can just plop them down with a copy of UPB and that'll take care of it.

Sadly, I'm not sufficiently fluent in coding or linear algebra to write my own Deep Learning algorithms (yet), but as a trained artist and a transhumanist, I still found the Science article about black box AI really interesting. It got me thinking about how regular humans might provide the same kind of feedback on the way we ourselves learn, and about the rationalizations we make for why we do what we do.


Riedl calls his approach “rationalization,” which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. “If we can’t ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf,” Riedl says. But those explanations, however soothing, prompt another question, he adds: “How wrong can the rationalizations be before people lose trust?”



As a student of the Moist Robot school of philosophy, I smiled broadly upon reading this, knowing that the problem isn't much different when it comes to trusting fellow humans. Trust issues stem from lack of true and complete knowledge about who or what we're dealing with, combined with the accrued dataset of all the ways in which such entities have failed us in the past.


To wit: mistrust is bred of ignorance and pain.


The same is true when it comes to the underlying mechanisms driving our fear of robots. No one mistrusts their smartphone on an ethical level because they know it's physically incapable of causing that level of harm, unlike an NS-5 or T-1000 or something of that nature. Thus, there is good reason for even a libertarian like myself to support:


... a directive from the European Union [in which] companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic.



As a libertarian, I'm against things like gun registration. Guns don't kill people, people kill people; and people are generally restrained by an objective system of ethics ... but this is a little bit different. In this case, the gun could potentially go about killing people on its own without being tethered to such a code of human conduct. I don't think I have to tell you how and why that's highly problematic.

[Image: the Joker]

Some AIs just wanna watch the world burn.

As I said, there is good reason to be concerned about this stuff right now. From a Futurism article on the same subject:

Vassar sees this lack of early attention, and not AI itself, as the biggest threat to humanity. He argues that we need to find a way to promote “analytically sound” discoveries from those who lack the prestige currently necessary for ideas to be heard.


I'm confident the human race is sufficiently motivated by fear to implement the necessary protocols, such that it won't ultimately be as bad as we imagine. It's the Adams Law of Slow-Moving Disasters at work.


Still, for now, let's not ease off that particular throttle just yet.


Caruana’s GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well.



Here's where I reveal my lack of subject-matter expertise and my bias towards art over science, as my first thought upon reading this was: "Well, why not just arrange each pixel as a cell in a spreadsheet, with metadata on hue, saturation, and value for itself and its adjacent neighbors?"


That's how procedural tile mapping works, after all.


As I said, the people who work on this sort of thing are probably way smarter than me, and I'm sure they've thought of it already (if not, you're welcome). In all likelihood, they already do something like this, or close enough, when mapping out images: the computer scans line by line, as in sorting tests or facial recognition software, comparing each cell to its neighbors and keeping track of general trends before reaching a conclusion.
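
Here's a minimal sketch of what I mean in Python, using the Pillow library. The file name "photo.png" is just a placeholder of mine, and real vision systems obviously do something far more sophisticated:

```python
# Flatten an image into one "spreadsheet" row per pixel, with
# hue/saturation/value for the pixel and its four adjacent neighbors.

import colorsys
from PIL import Image

def image_to_rows(path):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    px = img.load()

    def hsv(x, y):
        r, g, b = px[x, y]
        return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

    neighbors = {"self": (0, 0), "left": (-1, 0), "right": (1, 0),
                 "up": (0, -1), "down": (0, 1)}
    rows = []
    for y in range(1, h - 1):            # skip the border for simplicity
        for x in range(1, w - 1):
            row = {"x": x, "y": y}
            for name, (dx, dy) in neighbors.items():
                hue, sat, val = hsv(x + dx, y + dy)
                row.update({f"{name}_h": hue, f"{name}_s": sat, f"{name}_v": val})
            rows.append(row)
    return rows                          # each dict is one spreadsheet row

rows = image_to_rows("photo.png")        # "photo.png" is a placeholder
print(len(rows), "rows;", list(rows[0].keys()))
```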


That's how we humans do it as well, through pattern recognition. It's just that our visual cortices are far more advanced and better trained than any robot's to date.


Scroll further down the article to where they talk about reconstructing photos and filling in holes, and I'm reminded of my reaction to the Background Eraser tool when I first learned about it in Photoshop.


My reaction went something like this:

[Image: "What sorcery is this?"]

Sort of the same reaction I get from non-artists when I draw something beyond stick-figures.

Since then, I've learned more about both Photoshop and AI - how the process works, how it samples and interpolates based on tolerance settings - so it seems a lot less magical. Still, such tools can't quite do what you need in a single pass. We're still a long way away from that degree of competence, and so a human hand and a human eye are necessary to produce art and design.
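
For fun, here's a toy version of that tolerance idea in Python with Pillow. To be clear, this is my guess at the general principle, not Adobe's actual algorithm:

```python
# A toy version of tolerance-based erasing: sample a reference color,
# then make every pixel within the tolerance radius transparent.

from PIL import Image

def erase_by_tolerance(img, sample_xy, tolerance):
    img = img.convert("RGBA")
    px = img.load()
    sr, sg, sb, _ = px[sample_xy]             # the sampled background color
    for y in range(img.height):
        for x in range(img.width):
            r, g, b, a = px[x, y]
            # plain Euclidean distance in RGB space as the similarity test
            dist = ((r - sr) ** 2 + (g - sg) ** 2 + (b - sb) ** 2) ** 0.5
            if dist <= tolerance:
                px[x, y] = (r, g, b, 0)       # fully transparent
    return img

# e.g. sample the top-left corner and erase everything close to it;
# "photo.png" is, again, a placeholder
result = erase_by_tolerance(Image.open("photo.png"), (0, 0), tolerance=60)
result.save("erased.png")
```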


Even Google's Deep Dream, as advanced as it is, is still only capable of creating little more than trippy, surrealist doge fractals out of stock photos, as if it were Salvador Dalí on shrooms.


Like Sonny, it can't create a great work of art either.


Part of the reason for this is that art is ... I don't wanna say hard, because anyone can learn it if it's taught to them in the right way, but it is hard for a robot lacking our complex visual cortex, our pattern recognition, and our ability to feel and translate experiences into abstract representations and expressions. Like ethics, aesthetics is something we humans haven't fully solved down to the last iota.


Though again, once robots can learn from videos, we can just sign them up for lessons at the Barnstone Studios and fix that.

Returning to the iterative process of an AI filling in holes in images: I will admit it got fairly close after a number of iterations, but the result is still incredibly uncanny even to the untrained eye. We can see seams and artifacts that a professionally trained human would know how to hide.


Part of the trade secret among artists - part of the lie we tell - is that we never completely remove the seams or the lines or the edges in a piece, because what we're creating is inherently not reality but a mere abstraction thereof. At best, we just get really, really good at tricking you.


That's an iterative process that follows certain rules, which means a machine can, in theory, learn how to do it if we Moist Robots can. We just haven't figured out how to put it into practice yet.
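
To illustrate just how rule-like the basic version of that process is, here's a bare-bones iterative hole filler in Python with NumPy. Real inpainting models are far more sophisticated; this is just the Moist Robot explanation of the loop:

```python
# A bare-bones iterative hole filler: on each pass, any missing pixel
# that borders a known pixel takes the average of its known neighbors.

import numpy as np

def fill_holes(img, mask, passes=50):
    """img: float array (H, W, 3); mask: bool array (H, W), True where known."""
    img, mask = img.copy(), mask.copy()
    for _ in range(passes):
        known = mask.astype(float)
        nbr_sum = np.zeros_like(img)          # sum of known neighbor colors
        nbr_cnt = np.zeros_like(known)        # count of known neighbors
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nbr_sum += np.roll(img * known[..., None], (dy, dx), axis=(0, 1))
            nbr_cnt += np.roll(known, (dy, dx), axis=(0, 1))
        fillable = (~mask) & (nbr_cnt > 0)    # holes touching known pixels
        avg = nbr_sum / np.maximum(nbr_cnt, 1)[..., None]
        img[fillable] = avg[fillable]
        mask |= fillable                      # the hole shrinks each pass
    return img

# demo: punch a square hole in random "image" data and fill it back in
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
mask = np.ones((32, 32), dtype=bool)
mask[10:20, 10:20] = False
img[~mask] = 0.0
restored = fill_holes(img, mask)
```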


Here's where someone like me can prove useful to the world of AI programmers. As an artist and a Moist Robot, I can tell you my motivations. I can explain the rationale for the decisions I make when picking a luminosity or a line weight. I can take the same abstract aesthetic guidelines and parse them in a way that you, the programmer, can understand, which you can in turn use to design tests for your teacher bots to train your algorithmic students.


Not just me, of course, but artists in general - though I wouldn't mind working on such a project.


The rules of image-making the AI uses are probably more guess-and-check applied to very narrow sets of references than the broad, sweeping design principles used by trained professionals.


To go from merely filling holes to creating great works of art, you would need a different set of A/B tests that teach the robot: "This is balanced, this is not; this is harmony, this is not; this is designed on a sacred geometric gamut, this is not," ad nauseam.
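
Framed as a supervised learning problem, one of those lessons might look something like this sketch. The file names and the feature extractor are stand-ins of my own invention; a real system would learn far richer features:

```python
# Framing "this is balanced, this is not" as binary classification
# over a curated set of labeled example images.

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def features(path, size=(16, 16)):
    # crude stand-in feature extractor: a flattened grayscale thumbnail
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=float).ravel() / 255.0

# hypothetical curated lessons: 1 = "balanced", 0 = "not balanced"
labeled = [("balanced_01.png", 1), ("balanced_02.png", 1),
           ("unbalanced_01.png", 0), ("unbalanced_02.png", 0)]

X = np.stack([features(path) for path, _ in labeled])
y = np.array([label for _, label in labeled])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(balanced):", clf.predict_proba([features("new_piece.png")])[0, 1])
```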


Are the bots being trained in that?


Somehow I doubt it, though again I reveal my ignorance of what's actually happening in the world of AI development. It could be the experts have this covered, or it could just as well be that they've never even stopped to consider it. Either way, the robots will only do what their human programmers teach them (and their teacher bots) to do. So unless their coders know the rules of aesthetics and design, and/or moral philosophy and ethics, it seems highly dubious that their creations would be learning them.


There are certainly times when the artist has to learn a bit of coding, such as in making a video game or a website. Likewise, I think it would be helpful for AI developers and programmers to cross-train in aesthetics and philosophy, or to at least have them on their team, to gain the benefits of that subject-matter expertise.


Such a union, I feel, would only aid in making robots more moral, more empathetic, and thus less uncanny and more easily fitted within our society. We of course won't be able to completely erase the seams between Moist and Non-Moist Robots, but we'll at least be better able to hide them.


Again, I'm sure AI devs probably do all this already, but in case not ...


You're welcome.

My offer still stands to lend out my artistic and philosophic talents, by the way.
