Entries from February 2010
28 February 2010 · 11.58 pm · by anniekins127 · No Comments
I found the section of Gitelman’s New Media <Body> that plays on the idea of error messages particularly interesting. I never thought of that dreaded Error 404 page as an indication that the internet is a constantly evolving beast of information and people. Truthfully, I simply scoffed at broken links and decrepit web pages, or sighed in frustration when that picture on Google Images that was exactly what I needed for some project turned out to be a dead link. As Gitelman points out, however, these error messages are not just turn-back-now signs; they’re the authoritative, Big Brother voice of the World Wide Web itself, “which is at once authoritative and impersonal–a system of protocols [...] that is seldom acknowledged but always present” (132). I was astounded that the average life span of a web page is anywhere from 44 to 100 days (as stated in the chapter), which really isn’t all that long. You’d think that people who put the time and effort into learning how to build a website, and then actually built one, wouldn’t just let it languish…but then I think of all the hosting sites that provide templates for quick, easy ways to get a website up in less than 30 minutes.

That said, who exactly decides what should be preserved on the web and what shouldn’t? How many of the hundreds of thousands of websites out there should be preserved for “historical” purposes? And, as Gitelman points out, websites are constantly evolving. Does preserving a website mean locking it into one form forever for the historical record, or can it still be altered? What would happen if a website were archived and then changed completely by its author, so that its original intent was erased and the archived copy rendered irrelevant?
What do you guys think? Is the internet only there for viral, popular pieces of information–only to facilitate them as they peak and then fall away in popularity, or should we keep a massive record of every single click on the internet around the world? It’s certainly an interesting thought, especially when you think about just how many YouTube videos alone people would want to save…when will the Right Click–>Save As ever end?
28 February 2010 · 11.54 pm · by trip333 · No Comments
The discussion of machines performing repetitive and menial tasks to save humans time and effort is almost as old as the discussion of programs examining themselves and expanding beyond the limited confines of static code. In New Media, Gitelman talks about a bot that was able to turn pictures into characters, and thereby make the New York Times’ archive searchable by an indexer on the World Wide Web. Of course the program was not 100% accurate, but its limitations lay not in its ability to read, but in gaps in its programming for dealing with exceptions that might arise due to a wrinkle in the page or a smudged letter. Extending from this, she examines the programming of understanding: the idea of having a machine do not only the tedious searching, indexing, or calculating, but also the thinking, along with the other challenges the internet poses to our patterns of thought. Anyone who has worked in Photoshop understands the frustration of “I just want it to outline the PERSON,” a concept that seems easy to our minds but which the computer is unable to understand and complete for us. Similarly, internet processes require us to break down our patterns of thought and mechanize them in tiny chunks in order for a computer to be capable of performing them in varied settings. This has been important not only to the development of thinking machines, but to our own conceptualization of how we think.
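To make the “mechanize it in tiny chunks” point concrete: this is not from the reading, but a minimal sketch (with a hypothetical function name) of what a Photoshop-style magic-wand selection actually has to do. “Outline the person” gets reduced to a mechanical rule the computer can follow: grow a region of neighboring pixels whose values are close to the pixel the user clicked.

```python
from collections import deque

def magic_wand_select(image, start, tolerance=0):
    """Select the connected region of pixels whose values are within
    `tolerance` of the starting pixel -- a crude version of Photoshop's
    magic-wand tool. `image` is a 2D list of ints; `start` is (row, col)."""
    rows, cols = len(image), len(image[0])
    target = image[start[0]][start[1]]
    selected = set()
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) in selected:
            continue  # already part of the selection
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # off the edge of the image
        if abs(image[r][c] - target) > tolerance:
            continue  # pixel too different from the clicked one
        selected.add((r, c))
        # spread to the four neighboring pixels
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return selected

# A tiny 3x3 "image": 9s in the top-left corner, 1s everywhere else
image = [[9, 9, 1],
         [9, 1, 1],
         [1, 1, 1]]
region = magic_wand_select(image, (0, 0))
```

The rule knows nothing about “persons”; it only compares numbers, which is exactly why it so often selects the wrong thing.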
28 February 2010 · 11.39 pm · by hzty · 3 Comments
Last Saturday night, Super Mash Bros played at Pitzer. For those who don’t know, Super Mash Bros mix music like Girl Talk. As a fan, I was excited to go and listen to some awesome mash-ups. Then I realized that only half of Super Mash Bros had shown up. Now I understand that when you’re playing digital music and there are two of you, you can double your revenue by splitting up. But imagine going to a concert and having only half of, say, Belle & Sebastian show up.
It made me wonder about concerts and remixed music. What’s the point in paying a DJ to come and play music if anyone can play the same mix of tracks from his album? I love going to concerts because some performers, when live, make the music into something completely different. But when you’re playing someone else’s music and it’s the same tracks, does it really matter?
Some artists (like Girl Talk) try to avoid this problem by mixing in new elements in concert or playing new tracks. But while one half of Super Mash Bros tried this a little (mixing in the ubiquitous Party in the USA and Tik Tok to great cheers), for the most part his set seemed to come straight off his CD. I think musicians who work in digital music are going to have to figure out how to market themselves and their concerts better if they intend to be successful.
28 February 2010 · 11.35 pm · by saltaire · 1 Comment
After the most recent earthquake in Chile, Google has released “Google Person Finder” in an effort to “help people locate friends and loved ones who might have been affected by Saturday’s 8.8-magnitude earthquake.” Google Person Finder allows users to search by name or leave information about who they’re looking for (in English or Spanish). It is currently tracking about 35,500 records. Note: content is viewable and usable by all, and Google does not verify submitted information; it acts merely as a database. Google has also launched a crisis response page for those interested in “recent seismic activity in Chile, as well as resources to donate money to charities supporting the earthquake relief effort.” On top of that, Google has a Mobile Giving Foundation for relief in both Chile and Haiti. You can make $10 donations by texting the word “Chile” to any of the following numbers: 25383 (Habitat for Humanity), 20222 (World Vision), 85944 (International Medical Corps), and 52000 (Salvation Army).
28 February 2010 · 11.25 pm · by saltaire · 1 Comment
New media is the “use of a computer for distribution and exhibition rather than production.” This definition is too limiting. “The computer media revolution affects all stages of communication … all types of media.” All media have the potential to change culture. New media can be reduced to “numerical representation, modularity, automation, variability, and cultural transcoding.” The development of new media and the development of computers occurred at about the same time, which is not surprising!
Media machines and computing machines “were absolutely necessary for the functioning of modern mass societies” if we ever wanted to see true gains in efficiency. Mass media and data processing are “complementary technologies.” You could say they developed in step with one another. Computers are similar to cinematographs in that a camera records data onto film and a projector reads it off, while a computer likewise has its “program and data… stored in some medium.” Media are typically “reduced to their original condition as information carrier.” As cinema moved away from film it became a “slave” to the computer. Media and computer meet with Zuse’s film, in its use of “binary over iconic code.” The computer in that moment becomes a “media processor.”
The principles of new media can be broken into five categories. (1) Numerical representation. A “new media object” can be described “formally (mathematically).” It is “subject to algorithmic manipulation” and can now be seen as “programmable.” Digitization (“converting continuous data into a numerical representation”) can be broken down into sampling and quantization. This quantification of samples is “crucial for digitization.” “Without discrete units, there is no language” between and within media. Assembly lines? Factory systems? Just like modern media: a standardization of parts and a distinct separation of stages within the production process. (2) Modularity. “The fractal structure of new media” means media retain the same “modular structure throughout” even as they are assembled into larger objects; these small structures keep their “identities” in the process. (3) Automation. “Numerical coding” and “modular structure” allow for the “automation” of many new media operations: “bots,” virtual “theater” and “actors,” computer games, even “AI engines” and software “agents” for organization. (4) Variability. One type of media usually “gives rise to many different versions.” There is a response to a demand; it can be correlated with “social change,” and it is similar to the idea of “variable media.” Particular cases of the variability principle:
1. media database
2. different interfaces can be created from one database
3. user information can be used to customize the interface as well as create elements itself
4. branching-type interactivity
5. periodic updates
6. scalability (generating different levels of detail)
(5) Transcoding. “Cultural” and “computer” layers of new media influence each other and will result in a new “computer culture.” We are transcoding culture into the computer. There is a “conceptual transfer.” We must “turn to computer science” to understand this new media. New media is “analog media converted to a digital representation.” All digital media “share the same digital code.” New media “allows for random access.” Digitization causes an inevitable “loss of information.” Digitally encoded media “can be copied endlessly without degradation.” New media is “interactive.”
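The two steps of digitization named under principle (1), sampling and then quantization, can be sketched in a few lines of code. This is my own illustration, not from the reading; the function name and parameters are hypothetical.

```python
import math

def digitize(signal, sample_rate, duration, levels):
    """Digitize a continuous signal (a Python function of time) in two steps:
    sampling (measure it at regular intervals) and quantization (snap each
    sample to one of a fixed number of discrete levels in [-1, 1])."""
    step = 2.0 / (levels - 1)  # spacing between adjacent quantization levels
    samples = []
    n = int(sample_rate * duration)
    for i in range(n):
        t = i / sample_rate                              # sampling: discrete time
        value = signal(t)                                # continuous amplitude
        q = round((value + 1.0) / step) * step - 1.0     # quantization: discrete amplitude
        samples.append(q)
    return samples

# A 1 Hz sine wave sampled 8 times per second, quantized to 5 levels
wave = lambda t: math.sin(2 * math.pi * t)
digital = digitize(wave, sample_rate=8, duration=1.0, levels=5)
```

Every output value lands on one of five allowed levels (-1, -0.5, 0, 0.5, 1), which is exactly the “loss of information” the notes mention: the smooth curve between levels is thrown away so the result can be stored as numbers.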
The distinction between “new” and “old” media is blurry. Of course it depends on your definition, but I think it is shaped by cultural and social norms and expectations.
28 February 2010 · 10.31 pm · by trip333 · 2 Comments
This being the last week before the Academy Awards, I have been frantically trying to finish up my list of movies to watch. I have successfully seen all of the movies nominated for Best Picture and all but one for Best Animated Picture, but I still have a few to go for Best Actor/Actress and such. The last of the Best Picture movies I watched was The Blind Side, and I have to say that I was struck not only by the incredibly low quality of the film-making and acting, but by the borderline offensiveness of how the movie handled the delicate issue of race. It really bothers me that the Academy selected this movie for potential awards. Sandra Bullock’s accent was poor, her character was ridiculous, and her lines were frequently poorly delivered. The horribly sappy emotional climax of the film has somehow made enough of an impact to put this movie above other contenders for the #10 spot like Watchmen or Where the Wild Things Are. Anyway. I hope it doesn’t win.
28 February 2010 · 3.10 pm · by pcef91 · 4 Comments
I don’t know if the rest of you have spent a lot of time messing around with Google Trends but I certainly have. Personally, it’s one of my favorite Google apps. Being able to manipulate and analyze Google’s search data is a really awesome opportunity and one which I hope they expand on eventually. Here are a few trend comparisons I’ve made which I think are pretty interesting:
And the most common misspelling of facebook is…
This one’s kind of eerie:
Now picture the graph of the NEXT social networking website:
Too close for comfort:
28 February 2010 · 2.52 pm · by 3sam · No Comments
This is slightly belated, but because there was a lot to cover in the discussion on Wednesday, there were a couple of issues we only touched on briefly or didn’t get around to discussing. So I figured I could post a couple of points Bryan and I discussed when we were outlining the class facilitation and see what you guys think about them. One issue we mentioned but didn’t really follow up on was the idea of ending. Joyce instructs the readers of Afternoon to read until they feel an ending has been reached or they feel done with the process of reading. We wondered if Afternoon can be considered a true narrative without some sort of true conclusion (although perhaps Afternoon achieves this in some way?). How did the lack of a clear ending and resolution affect your reading of Afternoon? For me personally, it made the story feel less driven, and it was more difficult to motivate myself to continue because I didn’t know when the story would ever end. Reading Afternoon made me realize that I am driven mainly by a desire for resolution when I read fiction.

Another issue that seemed unique to cybertext was one brought up in the Kirschenbaum reading. Though there were multiple editions of Afternoon, these editions were not demarcated in any way, which contributes to the difficulty of discussing Afternoon. How do you discuss a text if it is unclear which edition is being referred to? Does the structure of these texts make numbering the editions irrelevant, and if so, why is a new edition necessary?
Lastly, we wanted to discuss the future of cybertext. Is it fated to remain a fringe movement, or is there potential for mass appeal? What if Faulkner, who had innovative ideas he was unable to execute in print, had had access to this technology? Do you think that “great writers” would be able to harness these technologies to create more innovative texts?
28 February 2010 · 2.08 pm · by Kathleen Fitzpatrick · 1 Comment
After nearly 20 years of intensive email usage, without once ever having made the reply-all mistake, it finally happened. It’s more than a little mortifying that it happened in a class in which I’m supposed to be teaching Good Internet Practices and stuff, but, well, I suppose it provides the opportunity for a little object lesson.
What is that lesson? Well, “don’t be in such a hurry” might be a good one. Or “check the address line twice, especially if you’re working in a relatively unfamiliar interface.” Perhaps also “check the Google Group settings to be sure you really want every single reply to go to the entire group” (they’re now set in a way that gives you the option to reply to the message sender or to the group).
But mostly it’s just a reminder about how easy it is to slip up online and let private information wander where it shouldn’t. So, let it be a lesson to me, I suppose.
My apologies to all of you. See you in class tomorrow.
28 February 2010 · 1.59 pm · by starki09 · 1 Comment
What would the offspring of a webcomic and a text-based RPG look like? Chances are it would be horribly deformed and strange, but apparently, it would also be hilarious. I know everyone got a taste of MS Paint Adventures on Wednesday, but I thought I would talk a little more about it here for anyone who was interested. MS Paint Adventures is a webcomic with a reader-driven storyline—i.e., readers create their own actions using the suggestion box. While Andrew, the author, used to take the first suggestion that was given (this led to some weird plot twists, like this), he is now slightly pickier, which gives the stories a bit more of a controlled feel (although they are still crazy/strange). One of the main things that I found separated MSPA from other comics was the update schedule. While most webcomics operate on a M/W/F update schedule, MSPA updates almost every day. Not only that, but there are usually multiple pages per day—Problem Sleuth, the only finished storyline to date, unfolded over about a year and weighs in at an astonishing ~1,700 pages. I think I got about 500 pages into it; maybe someday I’ll finish, but it’s scarily addictive (I’ve probably read 100 pages doing “research” for this post), and I just don’t have the time. What do you guys think? Is this a new direction for media, or just a novelty, as we put it in class?
Here’s the link for Problem Sleuth.