August 15, 2008
Spreading the Love: Approaching a Participatory Distribution Model
By: Ana Domb Krauskopf
With 'You' on the cover of Time Magazine, PricewaterhouseCoopers recommending that companies 'embed consumers into your operations' and Henry Jenkins proclaiming that 'if it doesn't spread, it's dead,' it is safe to say that the changing role consumers play is crucial to the new business landscape.
The rise of participatory culture has given a voice to an interconnected, empowered consumer. Over the past year the Nielsen/OPA ratings saw a dip in the use of the Internet for communications and a rise in content use. In January 2008, they decided to include "community use" as a new category in their index. The importance of shared content and of social media is, at the very least, worthy of attention.
Many advertising, production and public relations companies have surged around this phenomenon. Those who in the static/sticky model used to offer "Search Engine Optimization" are now proposing "Social Media Optimization" as a way for brands to ensure their success in the world of social media. They are creating tools to embrace a spreadable model.
Marketers have adopted the idea of spreadability with less difficulty. In the content distribution world, however, the shift in the power dynamic between companies and consumers has caused tension and, in many cases, animosity.
Underlying this tension is the company's need to find mechanisms to generate profit directly from its content. Participatory distribution should benefit both companies and consumers, incorporating mechanisms that generate value for both sides.
To achieve this goal, it is necessary to build a spreadable media content distribution model that grows exponentially, is decentralized and depends on consumers' interests, agency and, in most cases, their pre-existing social networks. (This is not the case for 'pull' distribution, as we will see later.)
Of course, this means that the consumer can be user, distributor and even retailer all at once. By ceding this power to its consumers, companies are losing much of their control over distribution while they are gaining the value of each user's personal ties.
Designing monetizable participatory distribution ventures is a response to the way consumers are already behaving. It is important to remember that the reasons for engaging in these types of transactions are different for users and for the company. Even when acting as a distributor (hence 'acting on behalf of the company'), users do not necessarily find value only in the virtual good being exchanged, but in the exchange itself and in the communities in which that potential exchange allows them to participate. Value is an evolving concept within these transactions.
The idea of participatory spreadable media distribution is closely related to the concept of "superdistribution", originally defined by Mori and Kawahara. It allows for the decentralized exchange of content combined with a usage-based charge mechanism and a defense mechanism against interference. The basic idea is 'to take advantage of the Internet's infrastructure to distribute content decentrally and additionally create the possibility to monitor usage and charge the user accordingly.' (Ahrens et al., 2008)
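The mechanics described here — free decentralized copying combined with metered, per-use charging — can be illustrated with a minimal, purely hypothetical sketch. Python is used only for illustration, and all names (`SuperdistributedContent`, `share`, `play`) are invented for this example rather than drawn from the article or from Ahrens et al.:

```python
from dataclasses import dataclass

@dataclass
class SuperdistributedContent:
    """Toy model of superdistribution: the content itself travels freely,
    but every use is monitored and billed to the user."""
    payload: bytes
    price_per_use: float = 0.5

    def __post_init__(self):
        self.uses = 0  # usage counter: stands in for the monitoring mechanism

    def share(self) -> "SuperdistributedContent":
        # Decentralized exchange: anyone holding the content can copy it on.
        return SuperdistributedContent(self.payload, self.price_per_use)

    def play(self, wallet: dict) -> bytes:
        # Usage-based charge: each access debits the user's account.
        wallet["balance"] -= self.price_per_use
        self.uses += 1
        return self.payload
```

The point of the sketch is only that copying and charging are decoupled: distribution costs the company nothing, while revenue is collected at the moment of use.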
Superdistribution has been studied and applied in computer science and systems analysis contexts, but rarely has there been any attention given to its cultural implications. The rise and necessity for this model is the result, not only of technological innovation, but also of significant cultural shifts.
Types of superdistribution
Kostamo et al. (2007) have identified three different ways of approaching superdistribution:
In the Direct Push model, the distributor is very active: the content is sent (given, passed, copied) directly from the distributor to the receiver. A good example of this would be when someone receives a "gift" through Facebook. In this case the receiver is completely passive. In the direct push approach there is 'repeated, active participation, and often, intense interactions, strong emotional ties, and shared activities among participants.' (Gupta and Kim, 2004). The success of direct push depends on the intensity and proximity of these relationships.
The other push-oriented class of superdistribution is the Indirect Push: the distributor does not give the content itself to the receiver, but instead offers information about the content, i.e., the address where the content is located. This requires less activity from the distributor, but it also leaves it to the receiver whether or not to seek the content out. This type of distribution is less certain and will depend entirely on the receiver's interest, the relationship with the distributor and the potential quality of the content. Here users share a context of social conventions, language, and protocols, but they don't necessarily have close relationships.
Finally, Pull distribution occurs when the distributor is passive and the receiver is active. Here the receiver becomes more clearly the seeker and must actively search through some kind of information system (the web, P2P networks) for the content. The reason for belonging and participating in these communities is a shared goal, interest, need, or activity. The personal currency these users generate is sometimes anonymous and hard to identify, but the community is sustained by 'reciprocity of information, support, and services among members.' (Gupta and Kim, 2004)
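The three modes Kostamo et al. distinguish differ only in who acts and what travels. As a rough sketch (Python is used purely for illustration; the catalog, function and class names are invented, not taken from the literature):

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    DIRECT_PUSH = "direct_push"      # distributor sends the content itself
    INDIRECT_PUSH = "indirect_push"  # distributor sends only a pointer
    PULL = "pull"                    # receiver searches an index themselves

@dataclass
class Receiver:
    inbox: list = field(default_factory=list)

# A shared "information system" (the web, a P2P network) in miniature.
CATALOG = {"song.mp3": b"...audio bytes..."}

def distribute(mode: Mode, item: str, receiver: Receiver, query: str = ""):
    if mode is Mode.DIRECT_PUSH:
        # Active distributor, passive receiver: content arrives directly.
        receiver.inbox.append(CATALOG[item])
    elif mode is Mode.INDIRECT_PUSH:
        # Distributor offers only the address; receiver may follow it or not.
        receiver.inbox.append(f"link:{item}")
    else:
        # PULL: distributor is passive; the receiver queries the system.
        receiver.inbox.extend(CATALOG[k] for k in CATALOG if query in k)
```

The shared structure makes the cultural contrast visible: the code path changes only in who initiates the transfer and whether content or merely a reference changes hands.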
Josh Bernoff, Vice President at Forrester Research, is sure that 'social media is recession resistant,' as opposed to the Web 1.0 bubble. Within 'social media' he includes blogs, YouTube and social networks. This surely points to the importance of developing tools that will harness the power (and profitability) of the spreadable media model: a space where the user is both consumer and partner, and where the social nature of the distribution process is recognized and encouraged.
Ahrens, S., Hess, T., Pfister, T., and Freese, B. "Critical Assumptions in Superdistribution-based Business Models: Empirical Evidence from the User Perspective," Proceedings of the 41st Hawaii International Conference on System Sciences, 2008.
Ana Domb Krauskopf has worked as a journalist, producer and arts manager in Central America. She is now a second year student at the Comparative Media Studies program at MIT and a Graduate Researcher for the Convergence Culture Consortium. Her thesis will focus on alternative film distribution.
Glancing at the C3 Blog
Don't forget: you can read and respond to our daily articles and conversations on the C3 blog.
Revisiting 3D - A Phased Foray into the Third Dimension of Media Images (Part 2 of 3)
By: Stefan Werning
Last week, Stefan provided an overview of the technological development of 3D technologies, discussing the 'golden age' of 3D films in the 1950s, and the impact 3D technology and representation have had on film genres and technological prowess. This week he continues the discussion of the impact of 3D technology on the way we think about storytelling by considering the current resurgence in interest in 3D techniques.
Setting aside the heyday of early 3D movies in the 1950s, another, albeit overlooked, phase in the history of 3D media representation was the debate about 3D television formats as a potential new paradigm only several years ago (e.g. Polisano, 2004). As McLuhan observed, after radical shifts in both media and societal systems, the previous state appears almost like a "game" in that it is ex post regarded as artificial or 'staged'. The thought of 3D television as a widespread technology standard appears similarly surreal in retrospect, since high-definition standards and screen sizes have come to dominate discussions about home cinema, at least for the time being and even though "3D-ready" TV sets are already available from companies like Samsung.
For that reason, the interpretation of 3D is still shaped by its paradigmatic application in theme park rides and simulations and is thus marked by an oscillation between spectacle and serious applications, which together constitute the current epistemological frame of reference for this technology. (2) The current preference for higher screen resolution, uncompressed images enabled through formats like Blu-Ray and complementary technologies like ambient TV lighting (e.g. the Philips Ambilight) analogously marks a different, contingent realism paradigm that affects the interpretation of films and other works using those techniques.
The relevance of a technological epistemology also comes out in the case of the increasingly refined algorithms that eliminate 'jitter' arising from discrepancies between the left and right image. While those technological imperfections formerly made the mediality visible and signified the use of cutting-edge technology that 'guaranteed' the highest achievable level of visual realism, the current ideal is to eliminate all 'traces' of the shooting process. In a similar vein, live actors constitute a characteristic 'problem' in stereoscopic feature films, which began with easier-to-produce animations, since the shooting process needs to be as error-proof and comfortable as possible to avoid disturbing the actors' performance. As the audience becomes increasingly aware of these circumstances, actors themselves become 'special effects' (similar to 'hyper-real' design strategies in earlier digital VFX movies, such as the 'molten metal' surface of the T-1000 in Terminator 2) rather than the 'opposite' of special effects as which they are discursively positioned in regular films. (Greene, 2008)
Moreover, 3D technologies similarly have a strong impact on the apparatus and thus on the viewing experience. For instance, with the Sharp research model for 3D television displays, users had to sit almost directly in front of the TV screen (50cm to either side), while other models sacrificed screen resolution to increase the viewing angle (by projecting more than two pictures) or even tracked the viewer through motion sensing to adapt the projection angle, which, however, only worked for one viewer at a time. (4) Similar 'coercive' effects had earlier been imposed by stereoscopic goggles as a tangible intermediary 'interface' and, even earlier, by the uncomfortable viewing position required by stereoscopes for still images.
These effects on the viewing disposition, to which viewers had been tentatively accustomed, have epistemological implications as well and, along this axis, create technologically motivated continuities with other types of applications, such as BMW adaptive car headlights which 'recognize' the viewing direction of the driver by extrapolating from a number of variables obtained from sensors. Thus, even though a universal standard has not yet been achieved for 3D television, 3D media representation can be aligned with other technologies on the basis of technological convergence and feeds into a multi-layer model of interdependencies that ultimately drive and shape further developments. Simultaneously, this convergence can affect the user's stance towards key technologies: in the latter example, the notion of partially ceding control to an AI instance and, more generally, of 'sentient applications' tracking user behavior, a stance already 'learned' through the multifarious instances of AI control within digital games (e.g. context-sensitive guidance systems analyzing the situation through low-level AI routines).
This development is mirrored by recent research into advanced three-dimensional representation in digital games, which similarly uses quasi-AI algorithms to 'observe' and interpret user behavior. For instance, Johnny Lee demonstrated the ingenious use of the Wii controls to enable "Head Tracking for Desktop VR Displays using the Wii Remote" by inverting the usual positions of sender and receiver in his eponymous and immensely popular video documentation. Sony picked up the ball: first a Sony of America programmer used the PlayStation 3 Eye camera to perform the head tracking, and the company announced that it would distribute the recognition algorithms for mash-ups and even commercial use. Later, Sony itself moved toward commercializing the technology, which in its current form even differentiates between multiple viewers, a problem not yet solved by experimental stereoscopic TVs. Thus, from this angle, the utopian notion of 3D representation, now more than ever, appears to catalyze and bundle the 'subversion' and playful overstraining of hardware to create new types of applications alongside the gradual improvement in 'mainstream', cinema-driven research. At the same time, the fundamentally different technologies involved in creating real-time as opposed to recorded 3D media content widen the perceived epistemological gap between the two, even though differences in visual realism, such as texture detail and shaders, are becoming gradually harder to discern.
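The geometry behind Lee's inversion is straightforward: the Wii Remote's infrared camera reports the pixel coordinates of the two LEDs worn on the viewer's glasses, and the distance to the head follows from their angular separation. The sketch below is a rough approximation, not Lee's actual code; the camera constants (resolution, field of view, LED separation) are illustrative ballpark figures rather than values given in this article:

```python
import math

# Illustrative constants (approximations, not figures from the article):
CAM_W_PX, CAM_H_PX = 1024, 768   # IR camera resolution of the Wii Remote
FOV_RAD = math.radians(45)       # approximate horizontal field of view
LED_SEPARATION_MM = 205          # distance between the two IR LEDs on the glasses

RAD_PER_PX = FOV_RAD / CAM_W_PX  # angular width of one camera pixel

def head_position(p1, p2):
    """Estimate the head's position from the two IR dot coordinates:
    (x, y) as pixel offsets from the image center, z as distance in mm."""
    dot_dist_px = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    angle = dot_dist_px * RAD_PER_PX           # angular size of the LED pair
    z_mm = (LED_SEPARATION_MM / 2) / math.tan(angle / 2)
    mid_x = (p1[0] + p2[0]) / 2 - CAM_W_PX / 2
    mid_y = (p1[1] + p2[1]) / 2 - CAM_H_PX / 2
    return mid_x, mid_y, z_mm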
From the 1940s through the 3D boom in the 1950s up to its decline in the 1960s, the 3D era was thus enabled by various, partly overlapping developments which came together in an opportune constellation; similarly, the latest developments need to be seen as a conglomerate of various coalescing trends and not, as they are often portrayed, as the single-handed initiative of individuals like Cameron.
One of these layered developments at play is the systematic elaboration of distribution techniques, particularly digital distribution. Cameron decided to revisit 3D projection upon noticing that the first digital projectors, which were supposed to replace 35mm film, offered frame rates high enough to display the left and right image sequentially rather than simultaneously while still creating the impression of one 'moving image' (e.g. Cohen, 2008). Cameron himself points to an effect of this assumed 'layering' by indicating that the progression of digital cinematography has, in turn, been tied closely to and "catalyzed by" the momentum of 3D movies.
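The arithmetic behind frame-sequential projection is simple. The sketch below assumes the "triple flash" scheme common in digital 3D cinema (each film frame is flashed three times per eye to keep the alternation above the flicker threshold); that detail comes from general projection practice such as RealD's, not from this article:

```python
def projector_refresh_hz(fps: int, eyes: int = 2, flashes_per_frame: int = 3) -> int:
    """Refresh rate a projector needs to show left and right images
    sequentially: film frames x eyes x flashes per frame."""
    return fps * eyes * flashes_per_frame

# A standard 24 fps feature shown in triple-flash alternation:
print(projector_refresh_hz(24))  # → 144
```

It is exactly this 144 Hz class of refresh rate, incidental to the move from 35mm to digital projection, that made sequential left/right display feasible.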
Thus, aspects of distribution constitute an important design/production contingency which, however, has not yet been systematically investigated in the context of media studies. While media products are usually interpreted irrespective of their means of distribution, it appears worthwhile (as Hartmut Winkler suggested in his Discourse Economy; cf. Winkler, 2004) to consider distribution processes as meaning-making or meaning-modifying structures. In media studies handbooks, distribution is usually conceptualized as either analogue or digital (cf. e.g. Hartley et al., 2002: 69-70) and thus forced into a dichotomous structure which appears unfit to accommodate the numerous varieties of distribution contexts.
According to this model, analogue information is transmitted as a "continuous wave" while digital information is noticeably encoded and decoded, a framing which blanks out instances of 'encoding' in the case of analogue distribution. Moreover, analogue distribution is described as giving "better quality" and being "more true to the original," while digital information allegedly can be replicated without quality loss and is either transmitted ideally or not at all, which facilitates systematic product piracy. While these criteria are very schematic, they can provide clues for a typology of distribution scenarios that elaborates on how distribution changes the implied 'status' of a media text, for instance given the very basic distinction between streamed as opposed to downloadable video content. I am elaborating on these aspects in my habilitation treatise.
Stefan Werning works in product development for Nintendo of Europe. He recently finished his PhD at the University of Bonn, Germany. He has written on topics ranging from e-learning solutions based on digital games, modelling terrorism in recent military policies to interactive media analysis.