[NetBehaviour] Search term as text

Curt Cloninger curt at lab404.com
Thu Feb 18 23:13:18 CET 2010


Hi Jim,

Here is a paper published in 2006 by Google engineers about how the 
"safe search" image filter algorithm works:
http://www.cs.cmu.edu/~har/visapp2006.pdf
They describe algorithms that analyze the image itself (looking for 
skin tones, certain edges), apart from any consideration of the human 
language context surrounding the image on the web page where it 
appears.
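
(If you want a feel for what "looking for skin tones" means in 
practice, here is a crude Python sketch. The thresholds and the 
function name are my inventions, not the paper's, but it shows how 
far you can get on pixel data alone:)

# Crude, hypothetical sketch of pixel-level skin-tone scoring.
# The RGB thresholds are illustrative only. Uses the Pillow library.
from PIL import Image

def skin_fraction(path):
    """Fraction of pixels falling inside a naive RGB skin range."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    def is_skin(r, g, b):
        # one common heuristic: red-dominant pixels with some spread
        return (r > 95 and g > 40 and b > 20 and
                r > g and r > b and (r - min(g, b)) > 15)
    return sum(1 for p in pixels if is_skin(*p)) / len(pixels)

# e.g. flag an image for closer (edge/shape) analysis:
# if skin_fraction("photo.jpg") > 0.4: ...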

I assume the imgcolor feature works the same way. That would explain 
why these results can be so different:
http://images.google.com/images?q=%22in+rainbows%22&imgcolor=red
http://images.google.com/images?q=%22in+rainbows%22&imgcolor=orange
http://images.google.com/images?q=%22in+rainbows%22&imgcolor=yellow
The human language context surrounding those album cover images is 
more or less the same on each web page. The images that show up under 
the filter "imgcolor=red" aren't there because someone used the word 
"red" on those web pages.

No sense hypothesizing which it is. The algorithm either is or is not 
taking into account human language. We'd just need to dig up the spec 
(which is probably proprietary).

+++++++++++++

More interesting to me is the fact that color (and image content) 
can be analyzed mathematically, without recourse to contextual human 
language tags. The software doesn't need the word "red" surrounding 
an image to find a "red" image. We humans just need a "red" interface 
button so we can access the results. (Incidentally, I love how 
Google's interface button isn't the word "red," but is itself an 
image of the color formerly known as "red." [Although they do have a 
title="red" attribute on the HTML link, which causes the word "red" 
to appear when you hover over the red button.])

Even more interesting to me (beyond the back-end tech stuff) are the 
"curious" historical mashups that result from minimal input:
I Am Curious [Yellow] (1967/2009) - Curious + Yellow + Google
http://images.google.com/images?q=curious&imgcolor=yellow
"common" language translated into "proper" historically-contingent 
imagery -- Curious George, The Man with the Yellow Hat, and '60s 
Swedish Erotica, all "performed" by entering the word "curious," 
selecting the color "yellow," and clicking "search."

I can refer you to an archived, static object/image of this 
performance (in the form of a meta-image):
http://bit.ly/bIHxJ1
Or you can perform the piece yourself in real time right now:
http://images.google.com/images?q=curious&imgcolor=yellow

Best,
Curt


>http://images.google.com/images?q=vermilion
>http://images.google.com/images?q=chartreuse
>http://images.google.com/images?q=turquoise
>http://images.google.com/images?q=orchid
>http://images.google.com/images?q=orchid+dark
>http://images.google.com/images?q=gold
>
>these all work probably as well as something that samples the colors of the
>images. and only because of careful consideration of these, on the part of
>the google algorithms: the language in the html page in which the image is
>embedded; especially the language within and right around the <img> tag; the
>user response volume (click volume) to the image in image search results. in
>other words, semantic analysis begins with establishing correlations and
>then testing those correlations against human response. this is how the
>machine learns the meaning of 'red'.
>
>ja
>
>_______________________________________________
>NetBehaviour mailing list
>NetBehaviour at netbehaviour.org
>http://www.netbehaviour.org/mailman/listinfo/netbehaviour



