Tuesday, September 20, 2016

Still Data Fail

Amazon still thinks I'm a student, but for years I've told them I'm not. How is this a good use of customer data? How is this responsive to customers? It's insane and idiotic, and annoying: when I'm trying to give them my money, they make it harder to do so. (But yes, ok, I'm still an Amazon customer, so what do they care?)

Here's a screen grab from this month, September 2016:


But I've told their online help people that I'm not a student, back in April as you can see here and previously in January of 2015 as you can see here. So much for customer feedback.

Additionally, Twitter's recommender needs some help:

Data & Society is an incorporated entity, like Valvoline, but the two organizations are nothing alike and neither are their Twitter feeds. The Valvoline recommendation isn't marked as a "sponsored" post (i.e., paid advertising), and even if it were, the mismatch is just hilarious.

And currently I live in New York and I don't own a car.

The data is there, people just aren't using it well at all.

Monday, September 12, 2016

Star Trek and the Future

Star Trek inventing the future has been acknowledged before, but during the recent 50th anniversary of Star Trek, when many or all of the original episodes (remastered so they look nicer on today's televisions) were shown, I was struck by the earpieces the crew uses, since they look like clunky Bluetooth earpieces. Here's a quick (and thus blurry) photo I grabbed off my TV, with Spock in the foreground wearing his easily observable earpiece and Uhura in the background adjusting hers. Granted, for TV such technologies would need to be easily observable by the audience, especially in the mid-1960s; today not so much, since we actually have these things.

Thursday, July 28, 2016

Online Game Communities Presentation

About a month ago I did an online presentation for a summer class taught by Dr. Jaime Banks, who was over in Germany at the time for the summer session she and Dr. Nick Bowman are involved with, SPICE: Summer Program in Communications Erfurt. It was really great, and the students had some good questions. I put the slides (slightly edited) up on SlideShare; you can find them here. The talk looked at some work in gaming, play, and communities, using different data. The slides alone are not as good as the slides plus the audio, but there they are.

Sunday, July 24, 2016

TKinter, ttk, and Progressbar

tl;dr: ttk.Progressbar effectively maxes out at 99, not 100, despite the documented default maximum of 100. If you try to overfill it with step(), it won't accept the call that does so.

I was building a front end for a scraper app. At first I tried Xcode and Interface Builder (which I first saw over two decades ago on a NeXT machine; it was glorious then and it still is), but I couldn't get it to mesh with my Python code (so much of the online help is out of date). A friend told me I was being an idiot and should try something simpler, and I settled on TKinter, which had me up and running in very little time. (The front end took only two days, but I wasn't committing every waking hour to it, and I had to figure out how to take my linear Python script and conceive of it in the looping GUI manner, which was difficult.)

I wanted a text box field so the scraper could print to it the way it prints to the terminal with Python's print statement (I don't want the user to have to deal with the terminal or the console). I ended up using ScrolledText, which you have to import separately (as far as I can tell; it's working, and once it works I don't have time to poke at it too much). A few notes on ScrolledText: I needed setgrid=True to make the frames resize nicely, which was VITAL; packing frames in TKinter is an art I do not yet understand. With the ScrolledText field itself, you'll want state='normal' to print to it, then state='disabled' so the user can't type in it (though they lose copy capability); you'll want insert(END, new_string) to print at the bottom of the field, but then you also need see(END) so that it scrolls to the bottom, otherwise it prints at the bottom but the view stays put at the top. Details.
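
Here's a minimal sketch of that setup, assuming the same Python 2 style imports as the code further down (the option values other than setgrid and state are just my guesses at reasonable defaults):

from Tkinter import Tk
import ScrolledText as tkst

tk_root = Tk()
# setgrid=True is what made the frames resize nicely; the field starts out
# writable ('normal') and gets flipped to 'disabled' after each write.
the_text_field = tkst.ScrolledText(tk_root, state='normal', setgrid=True)
the_text_field.pack(fill='both', expand=True)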

Then I wanted two progress bars, one to show the user the scrape progress and the second to show the parsing progress. The scraping one I needed to fudge a little, so I tried....

my_window.scrape_progress.step(10) # first init step
my_window.scrape_progress.step(20) # bigger step

my_window.scrape_progress.step(20) # another bigger step
my_window.scrape_progress.step(50) # jump to done!

Where scrape_progress is the name of the Progressbar object for my scraping progress.

As you can see, that's 10 + 20 + 20 + 50 = 100.

The bar would fill 10% (10), then to about 30% (10+20), then to about 50% (10+20+20), then it wouldn't fill anymore.

Eventually, out of annoyance while trying alternatives, I used 49 instead of 50 for the last step, and it worked.

So no, the max is not 100, it's 99, so the bar values are probably 0-99 for 100 increments, as 0-100 would be 101 increments. I suspect that step(100) won't work, but step(99) should fill it to 100%.

Some code:

from Tkinter import *
from ttk import *  # ttk widgets should overwrite the plain Tkinter ones in the namespace.
import ScrolledText as tkst  # Not sure why this is its own library.

# From my window class def; nothing to do with the Progressbar.
# (tk_root is the global Tk() instance; the update calls make the new text show immediately.)
def print_to_text_field(self, the_string):
    new_string = '\n' + the_string
    self.the_text_field.configure(state='normal')    # allow writing
    self.the_text_field.insert(END, new_string)      # append at the bottom
    self.the_text_field.see(END)                     # scroll so the new text is visible
    self.the_text_field.configure(state='disabled')  # block the user from typing in it
    tk_root.update()
    tk_root.update_idletasks()
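
Since the snippet above only covers the text field, here's a minimal, hedged sketch of the Progressbar side (same Python 2 era imports; the widget options like length are just assumptions, and scrape_progress is created standalone here rather than inside my window class):

from Tkinter import Tk
import ttk

tk_root = Tk()
# Determinate bar; the nominal default maximum is 100.
scrape_progress = ttk.Progressbar(tk_root, orient='horizontal', length=200, mode='determinate')
scrape_progress.pack()

# Steps totaling 99, not 100: ending on 49 fills the bar, while ending on 50 left it stuck.
for amount in (10, 20, 20, 49):
    scrape_progress.step(amount)
    tk_root.update_idletasks()

tk_root.mainloop()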


Monday, July 4, 2016

Making a Spectrum/Gradient Color Palette for R / iGraph

How to make a color gradient palette in R for iGraph (that was written tersely for search engine results), since despite some online help I still had a really hard time figuring it out. As usual, now that it works, it doesn't seem too hard, but anyway.


(I had forgotten how horrible blogger is at R code with the "gets" syntax, the arrow, the less than with a dash. Google parses it as code, not text, and it just barfs all over the page, so I think I have to use the equal sign [old school R] instead. It is also completely failing at typeface changes from courier back to default. I see why people use WordPress....)

The way I will do it here takes six steps (and so six lines of code). There are a few different ways you could do this, such as where you set the gradient, or whether you assign the colors to the vertices (nodes) in the graph object or just use them at drawing time without actually assigning them in the graph object itself. The variable I based the gradient on is an integer, and given my analysis I'm making a ratio: for each item in my data, what is its percentage on that variable compared to the maximum? It's a character level in a game, so if a character is level 5 and the max level is 10, then the value I want is 0.5 (i.e., half).

Keep in mind that the gradient you use here isn't analog (like a rainbow with thousands [more I think] of colors), it's a finite number of colors, with a starting color and an ending color. If your resolution is 10 then you have ten colors in your gradient, determined by the software as 8 steps between the color you told it to start at and the color you told it to end at (8 steps + start color + end color = 10 colors).

The general conceptual steps for how I did it:
  1. Set the resolution for the gradient, that is, how many color steps there are/you want.
  2. Set up the palette object with a start color and an end color. (Don't call it "palette" like I did at first, that is apparently some other object and it will blow up your code but the error message won't help with figuring it out.)
  3. You'll want a vector of values that will match to colors in the gradient for your observations. For what I'm doing, I got the maximum on the variable in one step...
  4. ...and then set up the vector of values in a second step (so, this is a vector of the same length as the number of observations you have, since each value represents the value that matches up against a color in the gradient). (In my code here it's a ratio, but the point is you have numerical values for your observations [your nodes] that will be matched to colors in the gradient.)
  5. Create a vector that is your gradient that has the correct color value for each observation. (The examples of this I could find online were very confusing, and that's why I'm making this post.)
  6. Draw! (Or you could assign colors to your graph object and then draw.)
Let's look at some code and, on occasion, the resulting objects. (I'll include the code as one code block below this explained version.)

Don't forget library(igraph) 

Also, if you're new to iGraph, note that it uses slightly odd (well to me at least) syntax, or you can use slightly odd syntax, to access and assign values to the nodes, that is, the Vertices of your graph, with V(your_igraph_object), which looks a little odd when you do V(g)$my_variable, for instance. (Below I do use "my_whatever" to highlight user made objects, except I did use just "g" for my iGraph graph object.)

Also note that, I think, the my_palette object is actually a function, but it definitely isn't a "palette" in the sense of a selection (or vector) of colors or color values. I think that is part of what makes line 4, below, unusual. Maybe I should have used my_palette_f to be more clear, but if you've made it this far, I have faith in you. (Also note that colorRampPalette is part of R, not part of iGraph.)

Using the language from the above steps...
  1. Set resolution, I'm using 100: my_resolution = 100
  2. Set palette end points, this starts with low values at blue and high values at red: my_palette = colorRampPalette(c('blue','red'))
  3. Get the max from your variable you want colorized to make the ratio: my_max = max(V(g)$my_var_of_interest, na.rm=TRUE)
  4. Create your vector of values which will determine the color values for each node. For me it was a ratio, so based on the max value: my_vector = V(g)$my_var_of_interest / my_max
    • Notice here we have iGraph's V(g)$var syntax.
  5. Create the vector of color values, based on your variable of interest and the palette end points and the resolution (how many steps of colors). This will give you a vector of color values with the correct color value in the correct location for your variables in your df-like object: my_colors = my_palette(my_resolution)[as.numeric(cut(my_vector, breaks=my_resolution))]
    • Ok, let's explain that. Take my_vector and bin it into a number of parts -- how many? That's set by the resolution variable (my_resolution). By "bin" I mean cut it up, divide it up, separate it into my_resolution number of bins. So if I have 200 items, I am still going to have 100 colors, because I want to see where on the spectrum they all fall. Wrap that in as.numeric (cut() returns factors), which gives you, for each observation, the number of the bin it falls into. Then use those bin numbers as indexes into my_palette(my_resolution), which is the vector of my_resolution colors; the result is a vector of hex color values, the colors you want, in the correct order.
  6. Draw! plot(g, vertex.color=my_colors)
    • Note that we aren't modifying the colors in the iGraph object, we're just assigning them at run time for plot(). We could assign them to the iGraph object and then draw the graph instead.
Done! Let's look at two of the resulting vectors (though you should be using RStudio, of course, so you can see them anyway), since looking at them helped me understand what was going on.

So, my_vector is the vector of values for the variable of interest which determine the colors. They aren't the color values themselves, they are the positions on the scale which will get mapped to colors in the spectrum / gradient. (Note I have 1,019 observations in this data.)

my_vector   num [1:1019] 0.31 0.581 0.112 0.108 0.181 ...

So, we can see these are ratios and we know they're between 0 and 1 since that's how I set it up. (A percentage of the max value in this data.) These will map to the right colors in the gradient. Note we can change the gradient, either its start color, end color, or the resolution (how many steps), and this my_vector won't change. This my_vector gets mapped to the colors. What the colors in the gradient are depends on the start color, the end color, and how many steps in the gradient there are.

Then there is also my_colors, which have colors in hex! Exciting to see it work.

my_colors   chr [1:1019] "#4D00B1" "#92006C" "#1900E5" "#1900E5" ...

If you are great at mentally mapping hex RGB values to colors between blue and red, and those to a percentage between blue and red (blue and red being the start [i.e., 0] and end [i.e., 1] points as determined in line 2 up above), you'll note that the values in my_vector do indeed map to the colors in my_colors, which is cool. (You will notice the middle two hex digits, the green in RGB, are always 00, since there is no green when you go from blue to red.) Note that the 3rd and 4th values in the hex list (my_colors) are the same, as they map from 0.112 and 0.108, which, when binned into 100 bins, both end up in the same bin (roughly 0.11). Thus they have the same color value: 19 in hex for red (this is RGB, i.e., #RRGGBB) and E5 in hex for blue, and since E5 is close to the FF max, that's lots of blue and a little red, as both values are about 11% of the way up the scale from the bottom (blue) end to the top (red) end. This makes sense.
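
A quick sanity check of that hex arithmetic (the arithmetic is language-agnostic; I'll just use Python here):

# 0x19 red and 0xE5 blue, each out of 0xFF: roughly 10% red and 90% blue,
# consistent with a value about 11% of the way along a blue-to-red gradient.
red_fraction = 0x19 / 255.0
blue_fraction = 0xE5 / 255.0
print(red_fraction)   # ~0.098
print(blue_fraction)  # ~0.898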

So, there you go.

# Set up resolution and palette.
my_resolution = 100
my_palette    = colorRampPalette(c('blue','red'))

# This gives you the colors you want for every point.
my_max    = max(V(g)$my_var_of_interest, na.rm=TRUE)
my_vector = V(g)$my_var_of_interest / my_max
my_colors = my_palette(my_resolution)[as.numeric(cut(my_vector, breaks=my_resolution))]

# Now you just need to plot it with those colors.
plot(g, vertex.color=my_colors)

Sunday, July 3, 2016

Gephi and iGraph: graphml

When Gephi, which is great, decides to not exactly work, you can save your Gephi graph file in graphml format and then import it into R (or Python or C/C++) using iGraph so you can also draw it the way you were hoping to. (I'm having an issue with setting the colors at all in Gephi.)

It took me a few tries to figure out which format would work. I need location (since Gephi is good at that but I don't know how to make iGraph or R's SNA package do that) and attributes for the data. So far, so good!
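
As a rough sketch of the import step, here it is with python-igraph (R's igraph has the analogous read_graph(file, format='graphml')); the filename is hypothetical, and exactly which attributes come through depends on your Gephi export settings:

from igraph import Graph

g = Graph.Read_GraphML("my_gephi_export.graphml")  # hypothetical filename
print(g.summary())             # check the nodes and edges survived the trip
print(g.vs.attribute_names())  # node attributes, including any layout columns Gephi exported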

Some helpful pages:


Note!!!! Apparently if you make a variable in R (at least while trying to graph something with plot) and you name your palette variable palette, you will destroy (ok, ok, overwrite) some other official variable or setting also named palette (there is a built-in palette() function in R), but the error you get will not at all clue you in to what happened. Better to call your variable my_palette or the_palette, which is what I usually do (so why didn't I do it here?).

Saturday, June 18, 2016

Best Reviewer Award

And, here's the certificate! Nice and pixely.

Nat Poor, Best Paper Reviewer!

Wednesday, June 15, 2016

Recent Travel

I've been to Germany for ICWSM 2016, then Paris, then Hong Kong, then Japan for ICA 2016. You can see some of my travel photos on Instagram. Four weeks on the road.

ICA 2016, Fukuoka, Japan

Had a great and busy time at ICA 2016: one paper, one panel presentation, moderated a session, and won an award! (Google is being impossible with photos and tables as usual. So much for interfaces.)

I was lucky enough to be invited to speak on the new Computational Methods panel, for the CM interest group. I tried to give the crowd an exhortation to engage with such methods, because we as social scientists have a lot to offer computational analyses. You can see the slides on SlideShare, though I don't spell it all out in the slides when I present. My presentation got a nice tweet too!

Presenting on the Computational Methods panel.
As part of the Games Division pre-conference in Tokyo at Nihon University (I love the neighborhood there, the Ekoda stop on the Seibu-Ikebukuro line), we all went to Akihabara, and of course we saw and did cool things, like engage in deep discourse with Mario, the working-class Italian-Japanese plumber.

"You don't think quantitative and qualitative methods
are complementary? Explain!"

I also was lucky enough to run into Sanrio's Gudetama in Hong Kong and then again in Japan.



Gudetama!



I also won the very first "Best Reviewer Award" for the ICA Games Division, which is a great honor. We need more incentives like this, as reviewing is an important part of the quality of the discipline.

Awards for organizing, best papers, and best reviewer!

CityU Hong Kong Summer School

Had a great time teaching a class and also an impromptu session on Gephi at the City University of Hong Kong's Summer School in Social Science Research! It's in the Department of Media and Communication, and run by my friend Dr. Marko Skoric. The main instructor was Dr. Wouter van Atteveldt, who is awesome and has great hats as you can see.

I also was fortunate enough to attend CityU's Workshop on Computational Approaches to Big Data in the Social Sciences and Humanities, which was great and had lots of great speakers.

Me, showing some great students a few things about Gephi.


The three of us in front of the department sign.

Tuesday, April 19, 2016

When Companies Fail The Data

Recently, I have encountered three examples of how giant data gathering companies have completely failed to use that data in any sensible way. The companies are Facebook, Amazon, and Pandora.


Facebook served me an ad implying that Sylvester Stallone had died, without actually using any direct "passed away" words or phrases (since he hadn't). This is offensive, it's a lie, and I am not a particular fan of Stallone's films, although Rocky is a classic (but Cop Land, are you kidding me?).

Amazon continues to insist I might want Amazon Student, despite my explaining to them over a year ago that I am not a student (and my account is 16 years old). 

Pandora continues to serve me ads in Spanish (which I do speak, but not fluently) and for cars (I don't own a car). I even told a tech support person this, and he said there was nothing he could do about it.

These examples all point to the issue of not using the data you have and not taking direct information (data) from the user when the user gives it to you (which is much easier than trying to infer it, if indeed the user is truthful). 

Facebook
The Facebook ad is hugely problematic. The conclusions are that:
  1. The people at Facebook do not care about the accuracy of the ads they serve.
  2. The people at Facebook do not care if the ads they serve are purely for emotional manipulation.
  3. The people at Facebook are not using the 11 years of data they have on me to realize that I would not like this ad because:
    1. I do not like advertisements that lie.
    2. I do not like advertisements that manipulate.
    3. I am not a fan of Sylvester Stallone.
They have the data. They aren't using it.

Amazon
That Amazon thinks I am a student, even though I've told them I am not and even though they can see my account has been buying stuff for 16 years, is bizarre. I told a tech support person that I am not a student. Yet the algorithm they maintain apparently is not given this information at all, and it continues to annoy me with an extra page when I am trying to check out (yes, a good problem to have).

They have the data. They aren't using it.

Pandora
I grew up listening to FM radio, so I'm used to radio with ads. So far I use the free version of Pandora, which has ads, and I think that's fine (people should get paid). However, I am not fluent in Spanish, so Spanish-language ads are wasted on me (it's a waste of money for those advertisers), and I don't own a car, but I get ads for car service stuff (I don't even remember what exactly, but the problem is the same). So, since I think it would actually be nice to be served appropriate ads, and for those companies to get their money's worth, I text-chatted with a Pandora support person. He said he had no way to mark my account indicating that I do not speak Spanish.

And yes, I know the image is an ad for Flonase, not for cars; it just happens to have a car in it. I use it here because it's in Spanish (although I am complaining more about the audio ads; images clearly work better here).

Again, they have the data. They aren't using it.

Overall
For me, these are good problems to have. I have internet access and can buy books (although if it's new I'll try to get it from my local non-chain bookstore -- yes, I am serious). But all of these issues are annoying, not just because inappropriate content is being served to me, but because the companies should know better than to do that; in all cases, they either have enough information on me or I try to give it to them, and they still can't do it. And that's the distressing part: in this age of total information, some of the biggest information companies still don't know how to use data.


Thursday, April 14, 2016

For A Decent CSV Spreadsheet App

All I want is a decent spreadsheet app that does not insist on mangling my CSV files, which often have ID numbers in them that I might want to view as text and not numbers. Apple's Numbers is maddening (you have to export to CSV, extra steps, and it has a relatively low row limit, 65,535 I believe), and Microsoft's Excel is a little better, but I'll use it as an example here of What You See Is Not What You Get.

I am doing some work on cities and (county-level) FIPS codes (in the US, FIPS codes are federal-level identifiers useful for a lot of things; here they identify counties). Some cities are large and lie in more than one county. Some of the data I have deals with cities, and the income data is at the county level, so I need to map from cities to county FIPS codes.

Excel did not make this easy.

The file I grabbed off the net to help me map cities to FIPS (counties) quite correctly listed all the appropriate FIPS codes for each city. I needed to narrow this down to one (Wikipedia helped a lot, the geopolitical Wikipedians are nitpickers).

FIPS codes for counties have two parts, two leading digits for the state and then three digits for the county. So all FIPS codes that start with 36, for instance, are counties in New York state.

The format from my source file looked like this:

Raleigh, NC:    37063,183
Birmingham, AL: 01073,117
New York, NY:   36005,047,061,081,085

(I am pretty sure those 5 numbers for NYC are the 5 boroughs, I know Brooklyn is its own county, Kings county.)
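
Just to make that shorthand concrete, here's a hypothetical little Python helper that expands one of those rows into full five-digit county FIPS codes (the first code carries the two-digit state prefix; the rest are the three-digit county parts):

def expand_fips(codes):
    parts = codes.split(',')
    state = parts[0][:2]  # e.g. '36' for New York state
    return [parts[0]] + [state + p for p in parts[1:]]

print(expand_fips('36005,047,061,081,085'))
# ['36005', '36047', '36061', '36081', '36085']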

Excel, however, would show the following in the main view, interpreting these IDs as numbers--errors are in the parentheses, A, B, and C:
Raleigh, NC:    37,063,183 (A)
Birmingham, AL: 1,073,117 (A,B)
New York, NY:   36,005,047,061,081,000 (A,C)

Errors:
  A. Added a comma that isn't there.
  B. Dropped the leading zero.
  C. Rounded off the rightmost digits (the full number is longer than Excel's 15 significant digits of precision).
So there are at least three issues there, but the most difficult one is that it put a comma in after the two digits for the state, initially making me think that indeed the source file had a comma after the state component of the FIPS code. It did not. Parsing the file did not work.

That was all extremely infuriating, and it reminded me of Microsoft's Clippy, where the coders thought they always knew better than you. Granted, a lot of apps and even programming-language packages try to be smart and guess formats, and yes, this can be useful. But if there are leading zeros and commas in odd places (or not) and it's a CSV (text) file, there could be a default "read the CSV as text" option. Of course, it seems that neither of these two programs has been coded to play nice with CSV files.
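
That "read it as text" default is easy to get elsewhere; for example, a minimal sketch with Python's pandas (the filename is hypothetical), where dtype=str keeps the leading zeros and stops the IDs from being parsed as numbers:

import pandas as pd

fips_table = pd.read_csv("city_fips.csv", dtype=str)  # hypothetical file; every column stays text
print(fips_table.head())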

As such, they are not overly useful data science tools.

Tuesday, April 5, 2016

Case Study in Data Ethics at Data & Society

I am pleased to announce that a case study on data ethics, by myself and co-author Dr. Roei Davidson, has been published at Data & Society! Titled "The Ethics of Using Hacked Data: Patreon’s Data Hack and Academic Data Standards", we look at issues around using hacked data (or not).

Basically, no.

But I wanted to. See the paper for details! (It's free and concise, don't worry.)

Thursday, March 24, 2016

Microsoft's Epic Twitterbot Fail

If you read this blog, you've read about the rather hilarious failure of Microsoft's experiment with a learning Twitter bot. Trolls gave it so much input that it started turning out hateful, sexist, racist tweets.

So we really have to wonder...

  1. Why are Microsoft engineers so ignorant of Internet culture?
  2. Why do Microsoft engineers who program text-based bots have no idea about the range of text available?
Because these are epic failures. Epic. No wonder there are jokes about engineers being completely socially inept.

Monday, March 14, 2016

Plagued By Bad Design, Still

Design, from websites to cities to forks, is so important, all around us, and so easy to get right--but also easy to get wrong in some cases. Here's one that was easy to get right, but the designers and people who approved it still got it wrong (don't they even test these things?).

The NYC MTA information/help audio posts found in many subway stations have two words, and two buttons, as you can almost see in the first photo. Except that the second button is really hard to see (although this photo unintentionally made it worse than usual, but it's still pretty bad).

Actual info post thing.

There are two overall problems, which you can see a little in the below photo.

  1. The physical placement of the words in relation to the buttons. 
  2. The color of the buttons. 
At first glance it looks like there is one Emergency Information button. But there is a second, dark button there. And of the two words, Information is the one closest to the red button, and the red button is the button closest to the word Information. So the red button and the word Information must have some relationship.

They don't.

Notice the yellow lines are longer than the blue line.


Clearly, the Information button should be easier to see, and the two words and their actual buttons should be visually obviously related, that is, by distance (although you could also do color). One solution would look like this:
Much better!
I don't even have a degree in design. This isn't rocket science.

Sunday, March 6, 2016

Yelverton Seven

We held the seventh installment of the Yelverton Sessions (Yelverton Seven) in conjunction with CSCW 2016. Named after the location of the third meeting, held in Yelverton, England, the Yelverton Sessions combine intensive work sessions with visits to cultural and natural places of interest, not only as a break but as inspiration. And a lot of coffee and good food. They are usually, but not always, held in conjunction with a conference.

We voted to name it after the third session as by then we realized that yes, this was a sustained effort we wanted to continue. And, who doesn't like the word Yelverton?

  1. Yelverton One, Bangor Maine and Fredericton Canada (ICA 2011).
  2. Yelverton Two, Flagstaff Arizona and The Grand Canyon (ICA 2012).
  3. Yelverton Three, Devon England (ICA 2013). 
  4. Yelverton Four, Bainbridge Washington (ICA 2014).
  5. Yelverton Five, Hong Kong (WUN Understanding Global Digital Cultures 2015).
  6. Yelverton Six, Austin Texas (2016).
  7. Yelverton Seven, Santa Cruz California (CSCW 2016). 
We don't have Y8 scheduled yet, but it will happen at some point!

NYC School of Data

Spent most of the day yesterday at the NYC School of Data conference -- accurately billed as "NYC's civic technology & open data conference." Sponsored by a wide variety of organizations, such as Microsoft and Data & Society, the day involved a lot of great organizations, such as various NYC government data departments, included great NYC people such as Manhattan Borough President Gale Brewer and New York City Council Member Ben Kallos, and was held at my workplace, the awesome Civic Hall.

CSCW 2016

Just got back from CSCW 2016 in San Francisco -- was part of a great pre-conference workshop on data ethics, saw some great papers and some great people. Also, telepresence robots!

Friday, February 12, 2016

UT Austin!

Just spent some time down in Austin with some friends and colleagues, what a great time and a great place! (Natalie, JD and soon to be PhD, wrote about it too.)

Most of our working time was at IC^2 (Innovation, Creativity & Capital), which is ever so slightly off campus, but it's a nice walk and that means it's quiet and you can get a lot of work done.
We also stopped by both Communication Studies (Hearst, of course!) and the School of Information for seminars.
Yes, we actually did a ton of work. (RStudio, variables, models, theorizing, all that good stuff. And coffee.)

Tuesday, January 19, 2016

Meaningless Data Viz

This Google Trends data visualization is horrible. It does indeed show "top searched candidate by state", I would guess, but that doesn't at all mean what the map implies it means -- that is, positive popularity of that candidate and also a lead over the other candidates. It doesn't even come close to showing that.



The data underlying this map could be any one of these completely different scenarios, using just the first three listed candidates to show the problem:

Some Example Possibilities
Candidate    State A    State B      State C
1. Trump     1          1,000,000    1,000,000
2. Cruz      0          0            999,999
3. Rubio     0          0            999,999

The order of the candidates in the image may be from the data, or it may be from polls, or it may be something else; we don't know.

In theoretical State A, Trump does lead, but it's meaningless and no one is searching.

In theoretical State B, Trump leads, in a statistically meaningful manner, and people are searching (but we don't know exactly on what terms, "Trump liar" and "Trump bankruptcy" and "Trump racist" are not endearing search terms).

In theoretical State C, Trump leads, but it's a statistical tie, and lots of people are searching.

Each of these scenarios is massively different, yet they would all result in the same visualization.

There are other numerical combinations, this is just a sample of three.

This visualization also conflates geography with population; that is, it doesn't have any state-level per-capita correction. For this you need, I have learned, a cartogram (I think I've linked to that page before, it's really informative--here's one for the world with a slightly different approach). And, it only considers people who have internet access, who are using Google, and who are actively searching during the debate. That leaves out lots of people.

And, it leaves out anything that isn't a state (such as Puerto Rico), although I assume Washington, DC, is in there (who can tell?). It also, and this is a minor peeve, makes it look like the top of Minnesota is connected by land (it isn't).

Edit: Apparently this map is actually from Google, their "Google News Lab," according to one video, which is also where I got this map for the Democrats; it suffers from the exact same problem:

Tuesday, January 12, 2016

HICSS 2016

Just spent a great week in Hawaii at HICSS 2016. Some great people and great papers! Also a few not so great papers and some not great presentations, which are not problems I recall from previous HICSS.

And, I am now co-chairing the new Games and Gaming mini-track in the Digital and Social Media track, so, there's some work to do there. Should be awesome!

Update: The G&G mini-track has been approved and the CFP is out!