Eyeo Festival – Notes, Day 1
Monday June 27, 2011
Processing 2.0 – Ben Fry, Casey Reas
Since 2001, Fry and Reas have developed Processing, an open source programming environment created for the visual arts. In this presentation, they will discuss the past, present, and future of the project as it nears the 2.0 release.
Processing was founded on the idea that programming and education go hand-in-hand, and this has been key in the evolution of the project. Another foundational idea is that we need the ability to sketch through programming in order to prototype. This has driven the development of Processing as a tool for programming in a visual arts context. Processing was made for teaching programming – it is a bridge to other programming languages. Processing also has a community infrastructure, is extensible through libraries, and has a concise and scalable IDE.
Check out the Processing libraries for enabling audio, video, and communicating with other devices: Processing.org/reference/libraries
People use Processing as a platform for cross-media design. There are a lot of great online tutorials and books out there on Processing – one example is Processing: A Programming Handbook for Visual Designers and Artists by Casey Reas and Ben Fry.
Truth and Beauty – Moritz Stefaner
Moritz Stefaner keeps chasing the perfect form for information. In his talk, he will deconstruct some of his recent works, shedding some light on his work process and the rationale behind the design decisions. We will also learn about some of his experiences in how to make a living as a freelance information visualizer, and why a flower garden can be a perfectly fine data visualization of the latest OECD country ranking. Also, he has a lovely German accent.
Stefaner began as a web designer, then moved to Cognitive Science and explored how we work with information. He feels that truth and beauty should guide the work that we do in data visualization. He began working on data visualization through an interest in navigating information spaces. He got his masters at Potsdam in Interface Design. He now titles himself "Truth and Beauty Operator" – this is very fitting for his focus and his work. Some themes that Stefaner is interested in are integrity, form, and function.
One of his first interactive visualizations – Organic Link Network
Another example he gave was Notabilia: Visualizing Deletion Discussions on Wikipedia. Stefaner shared his process for designing Notabilia, a visualization of deletion discussions on Wikipedia. He shared sketches, experiments, and drafts, highlighting the need to truly explore the data in visual form. He explained how he arrived at the final format, and how the organic tree structure evolved naturally from the data.
His process involves making a ton of different charts to explore the data – lots of experiments, many generated visualizations – in order to understand the limitations of the data and find a quick way for the eye to calculate and understand. In this way he is able to reduce the data down to what is most interesting. He asks: what does the visualization evoke beyond the literal meaning of the data?
In the context of sharing his Twitter visualization, Revisit, he notes that when you are working with dynamic data, you need to figure out what do you keep, and what do you throw away. Revisit was used to show the #eyeo Twitter feed throughout the Eyeo conference.
(An archive of the Eyeo Twitter Revisit can be found here: http://moritz.stefaner.eu/projects/revisit/index_eyeo.html )
Another project that Stefaner shared was Map Your Moves, a visual exploration of where New Yorkers moved in the last decade. The visualization allows you to make selections and explore trends (if any) and reasons for why people move in or out of a location. Data was collected from a survey broadcast in the New York area. Because of this, Stefaner chose to put a lens on New York and give a more logarithmic sense of distance in relation to the other locations on the "map."
One of the main examples he shared was the OECD Better Life Index – a different take on country rankings. In this visualization, you can explore a new way of comparing country rankings, create your own "better life index," and share what you create. A revealing look at well-being in 34 countries around the world. Stefaner shared the many experimental variations generated in his development process, which were crucial in uncovering the final format. A notable part of the process was that he let the data drive the final decision on the format, and from exploration landed on the blooming flower-like structure to present the rich dataset. In this project Stefaner noted the importance of staying in line with the organization's branding, and how that played into the design of the final piece.
On an ending note, Stefaner asked us to consider what happens after a data visualization is launched – how are data visualizations being "remixed" by others? How do people use the data visualization? How does it feed back into other visualizations? He invites us to explore "data remixes" and data and visualizations as re-mixable cultural items. He stressed the importance of having the data first though – that you need to ground your work in real data. Stefaner suggests an openness in the process of creating data visualizations – "Don't try to invent, discover!" he says. "Data visualization is about telling a thousand stories, but not all at once."
Auto/biography: Data, Identity and Narrative – Greg J. Smith, Janet Abrams, Jer Thorp, Nicholas Felton
Identity has long been intertwined with key fragments of information: social insurance and credit card numbers, a current address, a passport and driver’s license, etc. While diary keeping may seem quaint and antiquated, the computation that drives contemporary culture has engendered a new era of pervasive surveillance where almost every discrete act/transaction/waypoint is logged on a server somewhere. In this session we will don our optimist glasses and discuss how ubiquitous data is inspiring new approaches for articulating autobiography, personal trajectories and neighbourhood narratives. The federal government distills your essence down to a census form, and Citibank might think of you as a set of purchase patterns – how can we co-opt and critically engage these approaches through visualization and mapping? More importantly: what can we learn about ourselves?
This panel brought up a lot of interesting topics and questions. I will attempt to summarize here.
We are in need of a new way of thinking about personal narratives and biographies. This is already being explored in works such as the Feltron report, but is becoming more apparent in our everyday lives, such as in Facebook use. What is your "personal data profile shadow" that you are casting through use of social media, online accounts, memberships? Even physical objects are becoming part of this "internet" of things, objects that speak to your personal narrative. Felton takes an interest in history, memory, keeping, and discarding. Where does data conceal meaning rather than reveal? What is truly personal vs. what we want to share and present as our "personal lives?" What is the role of large amounts of free data storage in the evolution of this discussion and our own personal lives?
Early visualization emerged from the challenge of representing computer networks. We have now moved from mapping computers to mapping people. Through our virtual presence, we now feel that we belong in a multiplicity of communities. What is the qualitative difference between the connection in person vs. the virtual connection? How can we knit together questions of embodiment, the relationship of people and places? What is the intertwining of communities of physical and virtual? Does the virtual play a part in inspiring a more specific interest in the physical?
Ownership of data, and the value of our data, will become a big focus of the near future, especially as we transition from physical artifacts to digital storage. What is the legal tie between you and the data about you? Thorp believes owning your data should be a right. Open Paths (https://openpaths.cc) allows you to secure ownership of your data by uploading it. From there you can donate your data to research. But even more powerful, it gives you a tool to re-live parts of your life through visualization of your data. People are not willing to share their current location, but are happy to share the past trail of their location data – this time-shift is important to understand. The power of personal data, the weight of it, is visceral. How will this re-living of personal data stories impact us? How can we provide the emotional experience of personal data to more people? This data that we are leaving could be the memorial of our lives. What would a data memorial look like?
Felton has origins in storytelling; the stories he had the best access to and ownership over were his own, so that is where he chose to draw from. In a way, it is a study in anthropology/archaeology. He takes data and imbues it with meaning retroactively through context. The process and resulting narrative reveal things about yourself. At first, Felton was mainly curious, but over time his curiosity about his activities has deepened into something more like an addiction. What does it mean that Felton is his own historian? We should be empowered to reconstruct our own histories, just as a photo album becomes a reconstruction of family history. When data is put back into a human context, it is much more interesting. It is surprising how much a personal narrative can affect you – photo compilations, for example. Felton chose the title "annual report" because the genre already existed, so it was a good platform to start from; he never intended it as a parody of corporate annual reports.
An interesting area to explore now is social cluster biographies – connections between people who belong to a community, and looking at connections of interests over time and geography. There is a continuing evolution of Facebook friendships for example. Why and how are people influencing each other? Why are we disappointed with certain histories (like LinkedIn visualized in comparison to mapping your location over the past year)?
The question is, are you willing to put all your info into these services in order to get the result? What is people's investment in this? How do you get people to invest in it? Our outlook on what is public or private is shifting, but if we own our own data, then that data becomes valuable. We are already in this exchange whether we know it or not – we pay Facebook with our information, for example. If you think about it – health, insurance, credit cards, mortgages – your data is important!
Beyond the Bar Graph (A Visual Narrative with Data) – Wes Grubbs
As the use of data visualization is growing exponentially across practically every profession today, we can see now that the way we understand complex relationships can’t always speak to us through an x- and y-axis. Sometimes we need more thought provoking depictions of data, just as we do with music and literature, to understand the world around us. Design theory is being applied to tell a story and give a visual narrative in diagrams more often than ever before. Is this a good thing or a wrongful manipulation of facts? Wes will demonstrate the importance of visual metaphors and their effectiveness, especially when drawing complex, multi-dimensional relationships.
Don't focus on design upfront, focus on the data. To understand data visualization, you have to understand statistics. Look at your data and ask the question - Is there a story here? You have to have data before you can concept anything.
Our brains are hardwired to remember imagery, not digits. Imagery, metaphor, and memory go hand-in-hand. How can you best represent your data visually? There are some pitfalls, for example circles are dangerous because there is an inaccuracy in the way we perceive circles. Choose a visual format that best fits and intuitively communicates your story.
You can be true to the research but not tell the true story if your data set is incomplete. Data is not truth; data is our truth and the story we want to tell. You need to look at the big picture to capture the real story. You need clay to make bricks – you need real, accurate, and complete data to tell the story. Annotation is key in order to tell the story behind the data.
An example shown was the Invisible City, featured in Wired Magazine. Techniques used were: varied size of text, stream graphs, stacked bar graphs, annotation, color. They expected the piece to engage the audience and draw people in.
Data Viz 101 – Getting Started with Data Visualization – Jer Thorp, Wes Grubbs
Jer and Wes will discuss the process of visualizing data. How to collect data, analyze it, and ultimately work with it to create visualizations, are the key points of focus in this class. This class is geared for anyone new to data visualization or those with experience who’d like to brush up on their skills. While no previous programming experience is required, to fully participate in the class, you should have Processing 1.5 or later installed. Download it at Processing.org.
Data visualization in three steps (for the most part):
1) Gather the data
Find something that is meaningful to you, for example, your data to start!
2) Parse the data into useful objects
Collect attributes on these objects. Let the data inform this parsing. Let the Objects inside the data inform the structures. The more time we can spend parsing the data, making the data useful, the more agile the data is. Data viz is like cooking. Chop the onions ahead of time!
3) Render the objects on the screen
Find ways to tell a story. Rendering the data allows us to understand that data more. Start even with a simple rendering in a common format to get to know your data – like a bar chart.
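The three steps above can be sketched in code. This is a minimal illustration in Python rather than Processing (in Processing you would use loadStrings() and the draw loop); the data and the Reading class are made up for the example.

```python
# 1) Gather the data: a made-up list of raw rows ("date,temperature")
raw = ["2011-06-01,72", "2011-06-02,75", "2011-06-03,68"]

# 2) Parse the data into useful objects with named attributes,
#    letting the structure inside the data inform the object
class Reading:
    def __init__(self, date, temp):
        self.date = date
        self.temp = temp

readings = []
for line in raw:
    date, temp = line.split(",")
    readings.append(Reading(date, int(temp)))

# 3) Render the objects: a bare-bones text "bar chart,"
#    just to get to know the data in a common format
for r in readings:
    print(r.date, "#" * (r.temp // 4))
```

Doing the parsing once, up front, is the "chop the onions ahead of time" step: the render loop only touches clean objects, never raw strings.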
Common formats for Processing: CSV, JSON, XML
(XML is the favored choice for now, though JSON handling will be better in Processing 2.0)
Values (columns) are typically delimited by commas; rows are delimited by carriage returns. CSV works for most data that can go in a simple spreadsheet, but it is not good for complex or relational data. Unfortunately there is no built-in CSV support in Processing, but you can use Java libraries like opencsv to import it. A con is that CSV is not very human-readable.
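The notes mention opencsv for Java/Processing; purely to illustrate the same idea (comma-delimited columns, one row per line, a header naming the columns), here is a sketch using Python's standard csv module on a made-up snippet.

```python
import csv
import io

# A made-up CSV snippet: one header row, then comma-delimited data rows
data = "name,year,count\nalpha,2010,14\nbeta,2011,31\n"

# DictReader uses the header row to give each column a name
rows = list(csv.DictReader(io.StringIO(data)))
for row in rows:
    print(row["name"], row["year"], int(row["count"]))
```

Note that every value comes back as a string; converting to int/float is part of the parsing step, not something the format does for you.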
XML is structured as nested nodes. The structure of the data is extremely flexible to define but not as flexible to change. A plus – Processing has built-in support for XML, and many APIs will return data in XML format. XML is hierarchical and human-readable. A con is that it can get very bulky (if you have over 2,000 rows you might want a leaner format than XML).
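To show what "nested nodes" looks like in practice: in Processing you would use its built-in XML support, but the same idea in Python's standard library, on a made-up fragment, is:

```python
import xml.etree.ElementTree as ET

# A made-up XML fragment: a root node with nested child nodes and attributes
doc = """
<planets>
  <planet name="Kepler-10b" radius="1.4"/>
  <planet name="Kepler-11b" radius="1.8"/>
</planets>
"""

root = ET.fromstring(doc)
for planet in root.findall("planet"):
    print(planet.get("name"), float(planet.get("radius")))
```

The hierarchy is what CSV can't express: each node can carry attributes and its own children, at the cost of the verbosity noted above.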
Kepler Exoplanet Candidates
Jer Thorp showed how adding dimensional viewing layers to this made the data more interesting and allowed the viewer to better understand the data. He noted that the original data was in a PDF that he converted to a CSV file.
Check it out on Vimeo
Comparison of the Bible and Quran
Wes mentioned that text is a great data set. The Bible is a good example.
A screenshot of this project
We Feel Fine
This is built using Processing. We Feel Fine scrapes blog and LiveJournal feeds for metadata about people's feelings around the world. It is a database going back to 2004, and luckily for us it has an API! This is a great dataset to play with when you are getting started.
(we did some live coding using this data set to learn some basic Processing methods)
Always look at a random selection from your data set.
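In code, spot-checking a random selection is one line with the standard library (the data set here is made up):

```python
import random

# Made-up data set; in practice this would be your parsed rows
data = list(range(1000))

random.seed(42)          # seed only so the spot-check is repeatable
sample = random.sample(data, 10)  # 10 distinct rows, chosen at random
print(sample)
```

Looking at a random slice, rather than the first rows of the file, helps you catch quirks (outliers, missing values) that cluster away from the top of the data.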
Find some data, ask some questions... vs. ask some questions, find some data.
Put values/variables and arrays at the top of your code so it's easy to change them later.
Ask yourself, what property can we map to that makes the most sense?
If your data is very granular, use position, or color... if it is not that granular, map to a palette of colors, or set of shapes. The trick is to figure out what visual fits what property of the data.
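One way to read that tip in code: granular (continuous) values can be interpolated along a ramp, while coarse (categorical) values get looked up in a small fixed palette. A Python sketch with made-up values and a made-up palette:

```python
# Granular data: interpolate a value along a continuous ramp
def ramp(value, lo, hi):
    """Map value in [lo, hi] to a gray level 0-255, clamped."""
    t = (value - lo) / (hi - lo)
    return int(round(255 * max(0.0, min(1.0, t))))

# Coarse data: a small, fixed palette keyed by category
PALETTE = {"low": "#2b83ba", "mid": "#ffffbf", "high": "#d7191c"}

def category_color(label):
    return PALETTE[label]

print(ramp(75, 0, 100))        # continuous value -> gray level
print(category_color("high"))  # category -> palette entry
```

The trick the notes describe is choosing which of these mappings fits which property of your data, not the mapping code itself.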
You need to know statistics to do data viz. It would also be good to know a bit of cognitive science - studies on color, shape, etc.
Pick good colors.
Never use a stroke on anything.
Add a little bit of transparency or variance.
Do iterative design... until you feel right.
Iterate: do. do. do. do. do. = fail. fail. fail. fail. fail. – that's the way to do it.
This will produce better design.
Gestural Computing and Speculative Interactions – Golan Levin
I am interested in the “medium of response”, and in the conditions that enable people to experience creative feedback with reactive artworks. This presentation will discuss a wide range of my own projects, with a particular attention to how the use of gestural interfaces, visual abstraction, and information visualization can support new modes of interaction, play, and self-discovery.
Levin gave a sweet talk to end the first night of Eyeo. He rushed us through a sub-set of his body of work, and showed us some of the incredible new projects he is working on. My notes are limited, but here is what I jotted down:
A thought: Infoviz as self examination for society...
Quotable: I don't think the absurd is important, it's crucially important.
Take-away: THE MOON!!!!
Some examples shown:
Secret lives of numbers - http://flong.com/projects/slon/
Messa di voce 2003 - http://flong.com/projects/messa/
Ursonography - http://flong.com/projects/ursonography/
Double-Taker (Snout) - http://flong.com/projects/snout/
Graffiti Markup Language - http://flong.com/projects/gml-experiments/
Last updated July 12, 2011 by Megan