Who gets to be a micro-elite?

peertopatent.jpg
Image source: Peer to Patent

A month ago, I heard Beth Noveck from the New York Law School give a talk at the Symposium on Reputation Economies in Cyberspace. She is working on an interesting initiative called Peer-to-Patent, which is trying to incorporate peer review into the patent review process. She pointed to a (then) recent blog post by Andy Oram on O’Reilly Radar:

“The idea of micro-elites actually came to me when looking at the Peer to Patent project. There are currently 1611 signed-up contributors searching for prior art on patent applications. But you don’t want 1611 people examining each patent. You want the 20 people who understand the subject deeply and intimately. A different 20 people on each patent adds up to 1611 (and hopefully the project will continue, and grow to a hundred or a thousand times that number).”

The concept of the micro-elite is interesting because it has characteristics of both a zero-sum and a non-zero-sum game. In theory, anyone can become a micro-elite by picking a sub-genre (or perhaps sub-sub-genre) and deepening their knowledge base. Picking something obscure helps achieve micro-elite status. The problem appears if you want to become a micro-elite on a popular subject; then, being one of the select few becomes much more difficult. Oram also notes that the project requires someone to go out and persuade the 20 experts to help out. But what happens when too many equally skilled people want to be involved? The term micro-elite by definition sets a finite number of participants. The idea of crowdsourcing, if you will, the patent review process is a very interesting one. Peer to Patent is just starting out. I’ll be curious to see how it scales, how the collaborative efforts can grow, and whether competition for participation emerges, either in general or over specific cases.

Posted in distributed working, information, innovation | Comments Off on Who gets to be a micro-elite?

Social Networks, Academic Rockstars, Micro-celebrity

almostfamous.jpg

Image source: Amazon.com

I love the idea of nanocelebrity or micro-celebrity, where people are famous among a small group but far from being household names. Academic conferences are often a great place to find micro-celebrity. In the US, few academics cross over into mainstream celebrity. Within a field, though, academics can become rock stars, with followers and detractors, controversial for their ideas. Their opinions can be widely cited and discussed in formal scholarship as well as on blogs and discussion groups.

At the recent Computational Social Sciences conference I attended and covered here, many of my social network theory rock star heroes were speaking, including Lada Adamic from the University of Michigan. Adamic has done some important early work looking into the link structures of the blogosphere. In 2005, she “famously” looked at political blogs after the 2004 US Presidential Election, showing how both blue and red blogs were far more likely to link to similarly minded blogs. Some of her visualizations made their way across the internet, increasing her micro-celebrity status. The first time I met her (in the elevator at another conference) I didn’t even realize who she was. In the world of micro-celebrity, one’s ideas can be posted across the blogosphere, and the occasional pictureless quote in a mainstream publication adds to one’s street credibility among a small fan base.

Clive Thompson recently wrote a nice column on the subject and how Facebook status updates are like sending out press releases. He quotes Theresa Senft, who is credited with coining the idea of micro-celebrity in the digital age: “People are using the same techniques employed on Madison Avenue to manage their personal lives.” In a networked society, information flows more freely and connections are more easily made. Groups of interested parties form around people through discussion forums and Yahoo Groups, the kinds of sites Thompson cites. However, micro-celebrity can be sliced into even smaller facets.

Facebook allows anyone to be and operate as a micro-celebrity. It’s not uncommon for people who went to college in the Facebook era to have over 500 friends. On the occasional ego-check to see how many “friends” I have, I am usually surprised to see how high the number is, because I don’t consider myself a power user. Digital micro-celebrity is replacing the traditions of the “small town.” The traditional “small town,” with multiple generations living nearby, if not on the same street or in the same house, fostered micro-celebrity. The only difference is that today’s micro-celebrities have a distributed network of fans, rather than local ones. Small groups (of, say, less than 1000) can easily form; some of the forces which motivate the formation of these groups are worth looking into, and will be covered in posts to come.

Posted in information, micro-celebrity, social networks | Comments Off on Social Networks, Academic Rockstars, Micro-celebrity

Passages: getting close to interactive fiction

screen.png

Aleks sent me this link to the game “Passages” a couple of weeks ago, which also got picked up on the blogosphere. It’s definitely worth spending ten minutes playing the game. I’ll try not to spoil it too much, but some may want to play it first and then read the post.

Passages is getting closer to what I would call interactive fiction. Although Passages is a game, it has a narrative associated with it. The game play leads the reader/player through a process of discovery, and to insights from the author. The success of the game hinges upon having a point of view, which most games as interactive fiction lack.

The main challenge of interactive fiction is that the author has a point of view, which she is trying to convey to the reader. This leads to tree-and-branch narratives, where the choices seem contrived or obvious in their attempts to lead the “reader” down a particular path. Interactive narratives are getting closer, but they still offer incomplete experiences because the reader/player always tries to do something not built into the game engine, which breaks the illusion. Games like Bioshock are definitely moving toward more cinematic gaming experiences, taking game art direction to new heights. However, improving interactive narratives is not solely a matter of more complex decision trees, artful imagery, or polygon renderings.

Passages is a very simple game, stripped down to 8-bit graphics. Its compelling narrative and commentary on life, relationships, and work lift it above other works. It’s a simple reminder that games, as the fiction of the future, will still need to have a perspective and something compelling to say. Otherwise, the form will remain relegated to the realm of genre fiction.

Posted in gaming, innovation | Comments Off on Passages: getting close to interactive fiction

Linking as a gesture of kindness.

thumbs.jpg
Image source: flickr

David Weinberger gave a description of a link in a panel last year at the Hyperlinked Society Conference: a link is a conscious act of generosity. These acts are moral, and they form the architecture of the web. He goes on to explain that the syntax of a link (i.e., the href HTML tag) has no meaning in itself; it is merely an instruction which points to another location. The meaning of the link, which can be agreement or disagreement, is found in the text surrounding it.

While these links have no inherent meaning, they do have value, which is why creating a link is an act of generosity. Google ranks pages by the number of links from other sites pointing to a page, and appearing early in a search result clearly has value over a later listing. You can only have a reputation if other people can find you. A page’s, and her owner’s, reputation thus relies on the generosity of others linking to her page. If an author disagrees with the contents of a page and wishes to dispute it, linking to the page only adds to its value and reputation. The author is then left not to link at all. However, this practice, which the status quo forces on authors, still leaves the reader at a disadvantage.

There have been suggestions to create a new kind of syntax and link taxonomy which would add to the current binary options of link or no link. The simplest system would offer three choices: positive link, negative link, and no link. This system would be very easy for users; all you would need to do is add a tag to the link.
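As a sketch of how such tagged links might be consumed, here is a minimal Python example. The rel values "endorse" and "dispute" are hypothetical, invented for illustration; they are not part of any existing standard:

```python
from html.parser import HTMLParser

# Hypothetical convention: links carry rel="endorse" or rel="dispute";
# plain links stay neutral. A crawler could then tally sentiment per page.
class LinkTally(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = {"endorse": 0, "dispute": 0, "neutral": 0}

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        rel = dict(attrs).get("rel", "")
        if "endorse" in rel:
            self.counts["endorse"] += 1
        elif "dispute" in rel:
            self.counts["dispute"] += 1
        else:
            self.counts["neutral"] += 1

page = '''
<p>I agree with <a href="http://a.example" rel="endorse">this post</a>,
but <a href="http://b.example" rel="dispute">this claim</a> is wrong.
See also <a href="http://c.example">some background</a>.</p>
'''
parser = LinkTally()
parser.feed(page)
print(parser.counts)  # one endorsement, one dispute, one neutral link
```

A search engine reading these counts could then discount or invert the reputational credit of a negative link, so that disputing a page no longer boosts it.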

Flipping forward one year, I was struck when Jonathan Zittrain pointed out in his talk last Saturday that the use of robots.txt files for telling search engines not to spider a file or directory started in the early days of the web as an ad hoc measure by individuals, and became an internet standard. Today it is much harder to get a standard adopted, but the story of robots.txt reminds us that it is possible to create grassroots change in internet standards. Endorsement links allude to aspects of the Semantic Web, but frankly, I’m not sure if that will ever come. Contextual syntax might evolve over time through gradual implementations.
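The convention Zittrain describes is strikingly simple: a plain text file at the site root. A minimal example (the directory path is illustrative):

```
# robots.txt at the site root: ask all crawlers to skip one directory
User-agent: *
Disallow: /private/
```

That such a modest convention became a de facto internet standard without any committee is exactly the point; endorsement links could, in principle, spread the same way.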

The idea of rated links gets even more interesting when you consider how search engines might use links to interpret reputation and authority. Of course, gaming the system would occur, but that happens now and should not deter the implementation of a link taxonomy. It might also encourage search engines to become open to annotating listings, as Frank Pasquale has suggested. Generally, search results are ranked by relevance or time of creation; new categories could be ranked by agreement, disagreement, or even controversy. The end result would be better ways for authors to link, for readers to understand the context of a link, and for searchers to use links in the aggregate.

Posted in google, innovation, networks, Uncategorized | 2 Comments

Fidelity in Facebook.

soundwaves_300.jpg
Image source: USGS

Yesterday, Facebook was frequently mentioned at the Symposium on Reputation Economies in Cyberspace, but I’m still trying to figure out the value of my Facebook network.

For a free service, Facebook is getting expensive. Not just for Microsoft, but for the users who maintain their social networks. Dealing with Pokes, Invites, and Scrabulous takes time, effort, and bandwidth. As the popularity and membership of Facebook increase, the cost of not participating grows as well. Just as there are costs associated with not having a telephone or email address, the social and economic pressure to join these sites can be readily felt.

These Facebook clicks of “friendship” are simple gestures that replace deeper interactions. This form of communication is low bandwidth, in terms of data but more importantly of cognition. We can now easily increase our social networks in terms of reach, but the fidelity of our interactions within those networks is decreasing. Facebook seems to value the size of the network, but not the fidelity of the links. The value of a network is not only the number of nodes, but also the quality of the information that flows through the edges between them. Finally, the work of Ronald Burt suggests that there is value in having a network that is distinct from those of the people with whom you are competing. The low fidelity of Facebook communications shows a shift toward networks with low costs, low effort, and few unique characteristics, which overall have less value than we suppose.
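To make the contrast concrete, here is an illustrative Python sketch that values a network by summing a fidelity weight per edge rather than merely counting links. The networks, edges, and weights are all invented for the example:

```python
# Two toy networks with the same five nodes and five edges, differing only
# in the fidelity of each tie (a made-up weight between 0 and 1).
def network_value(edges):
    """edges: list of (a, b, fidelity); value = total fidelity, not edge count."""
    return sum(f for _, _, f in edges)

shallow_ties = [("a", "b", 0.1), ("a", "c", 0.1), ("a", "d", 0.1),
                ("a", "e", 0.1), ("b", "c", 0.1)]   # many low-bandwidth pokes
deep_ties    = [("a", "b", 0.9), ("a", "c", 0.8), ("b", "c", 0.9),
                ("c", "d", 0.7), ("d", "e", 0.8)]   # fewer, richer exchanges

# Same node and edge counts, very different value once fidelity is weighed.
print(round(network_value(shallow_ties), 2))
print(round(network_value(deep_ties), 2))
```

A pure node-counting view (Metcalfe-style) would score the two networks identically; weighting the edges makes the fidelity trade-off visible.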

Looking at usage rates, it is becoming the preferred tool for many people, young and old. One of the main reasons Facebook is popular is its convenience. We are able to maintain relationships with what seems like a minimal amount of effort. A simple click lets us exchange gifts, play a game, or say hello. It also increases the user’s efficiency by automating one communication effort across many friends. By filling out one movie quiz, we apply that work to all our friends who answered the questionnaire describing how much they liked “Shrek.” In a way, it’s like sending out a mass interactive Holiday Letter, which is admittedly better than nothing, but not quite fulfilling. Nevertheless, these Facebook apps are extremely efficient for members who have hundreds of friends: we can interact with all of them through one gesture. For many current and recent students, Facebook is an intimate part of their social experience. However, it succeeds because it complements real-life interactions, or a past history of deeper ones. Those complements could have been face to face, or in some other digital form with more fidelity than a ten-word sentence posted on a user’s Wall.

What is the meaning of these gestures? What is the meaning when an app is flawed, as in the case of the movie taste matching application, if none of my favorite movies are listed? What is the cost of forgoing communication with higher fidelity?

The adoption of Facebook shows our willingness to extend a network (adding nodes) at the expense of quality of information and even meaning. Facebook is an important tool for maintaining relationships, especially when one person in a friendship moves away, such as attending a different university after high school or leaving a job. Before, these ties may have dissolved, but now they have a longer lifespan. But how long can a purely Facebook relationship last on its own?

The beauty of Facebook, and one of the reasons for its adoption, is that we can fake friendship, which is to say, simulate a relationship with minimal work. Rather than actually having a meaningful exchange with a person, in a couple of clicks you can send her the latest application interaction. But has anything meaningful been communicated?

We do not acknowledge the trade-off between a large social graph with less fidelity in communication and a smaller social graph with higher fidelity. Without that high fidelity, and the richer information it carries, the relationship and connection will most likely wither away. This trade-off is important and too often left unconsidered.

Posted in information, networks, repecon | Comments Off on Fidelity in Facebook.

I’m back and my brain is full.

I just got back from a couple of days outside New York. I had Friday off, so instead of Christmas shopping, cleaning house, and going to yoga, I went to a couple of great conferences. Yesterday, I went to the Conference on Computational Social Sciences at Harvard’s Kennedy School. Afterward, I got on a train to attend the one-day Symposium on Reputation Economies in Cyberspace hosted by Yale Law School’s Information Society Project. I looked around at the people today to see if anyone else had attended both. I’m not sure, but I may have been the only one.

I’m a bit tired, but there were some very good panels… and enough fodder for blogging to last me through the New Year.

Posted in social networks | Comments Off on I’m back and my brain is full.

Printing for the ages

pcam_front.jpg

After many years, I finally made it to a dorkbot meeting, a tech meetup before there were meetups. One of the three presenters was Ted Johnson, a great tech guy and overall hacker. He showed a handful of projects, but my favorite is his Instant Digital Camera. Hacking together a Gameboy camera, a screen, and a calculator printer, he captures digital images (at pretty decent resolution) and translates them into analog numeric printing, reminiscent of ASCII art.
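The underlying idea, mapping pixel brightness to marks of varying visual density, is easy to sketch in Python. The character ramp below is my own illustrative choice; Johnson’s printer uses the digits a calculator printer can produce:

```python
# Map each pixel's brightness (0-255) to a character of increasing density.
RAMP = " .:-=+*#%@"

def render(pixels):
    """pixels: rows of brightness values in [0, 255]; returns text art."""
    lines = []
    for row in pixels:
        lines.append("".join(RAMP[p * (len(RAMP) - 1) // 255] for p in row))
    return "\n".join(lines)

# A tiny gradient, dark to light and back.
image = [[0, 85, 170, 255],
         [255, 170, 85, 0]]
print(render(image))
```

The charm of the original hack is that the "pixels" end up as smudged numerals on calculator paper rather than characters on a screen.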

pcam_pnt1.jpg

The best thing about this project is how Johnson takes printing back in the opposite direction. The abstract rendering of forms by smudged numbers is a reminder of how digital color printing’s “perfection” can look really soulless at times. At work, I always prefer the old HP black and white laser printer over the color printer. The HP produces crisp black type, which you can physically feel to the touch. The color pages come out slick and shiny, as if they were still on a screen.

However, I’m skeptical of this nostalgia. Growing up in the transition from print to digital text, my infatuation with the physicality of text may merely be a reflection of age. The physicality of printed text gives an illusion of permanence that digital text lacks. Our psychological relationship to that illusion is powerful, reflected in the tendency to downgrade online academic journals and ebooks relative to their print counterparts. As the march toward digital text continues, for reasons of both efficiency and sustainability, the question remains what we will lose in the process.

Posted in information, innovation | Comments Off on Printing for the ages

Verizon set to open their wireless network in 2008.

logo_vzw.gif

Things just got really interesting in the mobile/wireless world. Verizon announced that it will offer two categories of service by the end of 2008: one will continue to be its bundled handset service, and the other will be open to any device. This change brings Verizon Wireless in line with the open networks available in Europe and Asia. The move will force T-Mobile, Sprint, and AT&T to consider offering similar services. (T-Mobile is already experimenting with allowing users to make WiFi calls.) The announcement may also affect the upcoming 700MHz spectrum auction, as the FCC did not require open networks. I’m hoping it will spur more innovation in the mobile space. Changes could happen quickly; we’ll have to wait and find out. I’m trying to stay optimistic.

Update: Techcrunch and GigaOM weigh in on the issue. A lot can happen in a year, and GigaOM is correct to be skeptical.

Posted in auction, innovation, mobile, networks, spectrum, telecommunications | 1 Comment

Controlling the Internet

internet_map.jpg
Image source: Wikimedia Commons, Matt Brim

The October issue of Discover magazine has an article that piqued my interest, entitled “This Man Wants to Control the Internet. And You Should Let Him.” The man is Caltech professor John Doyle, an expert in control theory. His field models dynamic physical systems, from a mechanical heart to space flight. The key idea is achieving a desired or steady state for one of these systems by taking current information about its state and “feeding back” that information to the system to make adjustments. These feedback systems are mathematically modeled. When the system is non-linear and dynamic, for instance an airplane flying through wind currents, the mathematics required becomes quite sophisticated.
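The measure-and-correct loop at the heart of control theory can be sketched in a few lines of Python. This is the simplest possible (proportional) controller, and the gain is an arbitrary illustrative choice:

```python
# Feedback control in miniature: measure the system's state, compute the
# error against a setpoint, and feed a fraction of that error back in as
# a correction. The gain k = 0.5 is illustrative.
def step(state, setpoint, k=0.5):
    error = setpoint - state   # how far are we from where we want to be?
    return state + k * error   # apply a proportional correction

state = 0.0
for _ in range(20):
    state = step(state, setpoint=100.0)
# After 20 corrections, state has converged very close to the setpoint.
print(round(state, 2))
```

Real controllers for non-linear, dynamic systems (like Doyle’s airplane in wind currents) need far more sophisticated mathematics, but the loop is the same: observe, compare, correct.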

Doyle and his collaborator, fellow Caltech professor Steven Low, have developed an improved protocol over standard TCP (Transmission Control Protocol). TCP describes how packets of data should be delivered and received over the Internet; FTP, email, and WWW applications all rely on it. Using control theory, their protocol, FastTCP™, clocks the time a data packet takes to reach its final destination and makes adjustments to optimize its stream of packets. Standard TCP does not take this extra information into account, relying mostly on a strategy of monitoring lost packets, that is, packets that don’t make it to their final destination. They won the 2006 Supercomputing Network Bandwidth Challenge with a maximum throughput of 17 gigabits (a full-length movie) per second.
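A toy sketch of the delay-based idea (this is not FastSoft’s actual algorithm; the update rule and constants are a simplified illustration of delay-based congestion control in general):

```python
# Toy delay-based congestion window update: compare the latest round-trip
# time (rtt) with the smallest ever observed (base_rtt) and steer the
# window toward a small target queue occupancy (alpha). Constants are
# illustrative, not FastSoft's.
def update_window(w, rtt, base_rtt, alpha=10.0, gamma=0.5):
    target = (base_rtt / rtt) * w + alpha  # shrinks as queuing delay grows
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Unlike loss-based standard TCP, the window backs off as delay climbs,
# before any packet is actually dropped.
w = 100.0
for rtt in [10.0, 12.0, 15.0, 20.0]:  # base RTT is 10 ms; queues building
    w = update_window(w, rtt, base_rtt=10.0)
```

The key contrast with standard TCP is the input signal: rising delay is continuous, early feedback, while a lost packet is binary feedback that arrives only after the queue has already overflowed.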

Improvements to standard TCP will be important in the coming years, as multimedia services (such as movies on demand) increase the demands on the current network. Already, VOIP services do not use TCP, because packets sent over TCP cannot be received and sequenced fast enough for real-time applications like phone calls.

Doyle and Low, along with Cheng Jin, formed the startup FastSoft to sell products based on FastTCP™. However, they have trademarked the name and filed patents on the technology. This is an important departure from the origins of the Internet, as no one owns standard TCP. Having to license or buy FastTCP™ from FastSoft has implications for the future of the Internet, and could lead to its fragmentation.

Last month, a team from Indiana University, the Technische Universitaet Dresden, Rochester Institute of Technology, Oak Ridge National Laboratory, and the Pittsburgh Supercomputing Center won the 2007 challenge. They achieved a peak transfer rate of 18.21 gigabits per second and a sustained transfer rate of 16.2 gigabits per second. It is not clear to me what kind of IP the team has on their technology. However, they received funding from the NSF, which may mean that some or all of their research will be placed in the public domain.

Demands for bandwidth are only increasing. A complete overhaul of TCP is years away and will involve incremental change, because the network at stake (that is, the Internet) is so important, as Doyle explains in the Discover article. How we meet those demands is already controversial.

Susan Crawford notes that Comcast is already traffic shaping, flagging packets from people using BitTorrent. (She also has a nice description of TCP in this post.) To meet this growing need, the network can improve performance in various ways, including upgrading the infrastructure, such as laying fiber optic cable; improving data compression algorithms; and improving the protocols that control data traffic. In all these areas, the ownership and regulation of these technologies have huge implications for the accessibility and adoption of the Internet. Although the Discover article’s title, “This Man Wants to Control the Internet,” is a play on Doyle’s field of study, it raises an important point. Having public and private protocols may not only make parts of the Internet inaccessible to each other, but also turn bandwidth into another form of economic inequality.

I’ve been slowly making my way through a very good book, “Innovation and Incentives,” by Suzanne Scotchmer of UC Berkeley. I’ll close with a quote from her chapter on “Networks and Network Effects”:

“The protocols of the Internet and worldwide web were developed at public expense and put into the public domain. Given what turned out to be at stake, that is probably one of the most fortunate accidents in industrial history.”

Posted in innovation, ip, networks, telecommunications | Comments Off on Controlling the Internet

My network is worth $1,195,537

networth.jpg

network_gallery.jpg

I guess that is something to be thankful for the day before Thanksgiving. GigaOM is linking to a Xing site that calculates the value of your network. Mine is over a million dollars. You enter some demographic information, and they describe the size of your network and your frequency of contact. Of course, a figure like how many people you speak to weekly is very hard to estimate, which makes me question what these numbers and charts really mean.

In the gallery section, you can compare your network value with others by country, industry, and age (which is the horizontal axis). This clever addition makes it competitive, and vastly more sticky and viral. But I’m not sure why we are seeing all the peaks and curves. I don’t know how many people have submitted data, so a few outliers might be causing the spikes. Getting more data points might smooth out the curve; I guess I’ll check back later.

Posted in social networks, work/life | Comments Off on My network is worth $1,195,537