CS 376

This is a repository of thoughts and responses from the CS 376 readings:

6/2/15:

Postcolonial Computing: A Lens on Design and Development link

  • This paper provides a refreshing and broad view that addresses HCI and design from a global perspective. Though lofty, the ideas presented here are necessary for designing a digital universe for the masses. It's true that the world is growing more connected as the web has brought distant communities and individuals together.
  • I found the discussion of economic models particularly interesting. Especially in today's world, the economic rift between different groups has become more and more apparent. Looking especially at Palo Alto and EPA, we see how economic incentives can play a role in the interfaces that different socioeconomic groups respond to.
  • I also wanted to comment on the point the paper brings up about open-source software in Peru. I'm of the opinion that open-source software won't stimulate an economy. Valuable IP becomes the livelihood of small companies (especially start-ups) as they try to disrupt an industry. If corporations are forced to open-source their software, innovation is stifled and new developments get captured first by larger corporations with more resources.
  • I would argue that in a multifaceted and diverse community like Stanford's, learning and teaching design in cross-cultural ways is even more important. I'm glad this reading is one of the later ones because it puts in perspective how broadly design can be defined and understood across many different groups of people. As designers, we're responsible for making sure that our work is cross-cultural, and this paper reminds us of that.

Yesterday’s tomorrows: notes on ubiquitous computing’s dominant vision link

  • It's sobering to realize that while some societies benefit from ubiquitous computing, a number of societies still struggle to provide even public wi-fi for their citizens. The article discusses disparities in third-world countries, and I agree that steps can be taken to level the technological disparities between third-world countries and their first-world counterparts. More resources and attention toward fields like edTech and mHealth provide avenues through which the traditional tech student can get involved in making a difference.
  • I agree with the paper's claim that we already live in a world of ubiquitous computing. I enjoyed the case studies that the paper presented of different cultures. It's interesting to note how the infrastructure that these foreign governments provide leads to the adoption of ubiquitous computing. I certainly think that the different perspectives that Korea and Singapore have on privacy played a conducive role in getting the public to adopt and approve of such infrastructure.
  • The article claims that "messiness" is part of the inherent nature of such ubiquitous infrastructure. In the context of such large data sets and the complexities of today's massive networks, I can certainly see how this is the case. It certainly seems to be a misconception that the "ubiquitous world of computing" is expected to be without inefficiencies and starkly different from the world we live in today.
  • However, I am aligned with the author's claim that we are already living in it. Perhaps our bar for such a world is too high, but the affordances of today's devices are already evidence of such a world. Google's Nest and the Apple Watch both point to connected devices that have brought us to a world where technology has started to fade into the background of our day-to-day lives.

5/26/15:

Content-based Tools for Editing Audio Stories link

  • At a higher level, this study explores the utility of a tool to augment creativity. This is an interesting question because it involves a necessary definition of creativity and what changes when such a tool is applied. Do we become more creative when we're able to consider and assess many pieces and combinations?
  • At the core of this question lies the motivation behind creating tools like Photoshop and GarageBand. When we're able to accelerate the process of creating, can we argue that we are more creative? Or simply more effective creators? I would have liked to see some more discussion of these questions in the paper, but perhaps we can bring this up in the class discussion.
  • As a system-based study, the technological overhead of combining results and functionality from multiple APIs and interfaces was an impressive feat. I know how hard syncing the timing of different threads can be, especially from a web development standpoint, so I can appreciate the complexities involved in the multiple interactions required to edit and then resynthesize multiple tracks.
  • Absent from this study was the process of needfinding that I would have expected from a system-based study. I'm also surprised that the researchers didn't show evidence of prototyping and building out the different hypotheses that may have been part of their process in designing their final interface. In our CS376 project, the system we built involved multiple iterations and paper prototypes to arrive at the interface we eventually built out. I would have expected a publication-worthy system to present the same level of exploration and depth.

Let's Get Together (Formation and Success of Online Collaborations) link

  • This paper presented some conclusions that were weakly tied to the data given. I felt that this study had the potential to be data-driven, but the authors fell short of designing a study that yielded structured data. In particular, the findings on negative predictors felt unfounded. It would have made their conclusions stronger to see some of the findings backed up by concrete data.
  • I think an interesting part of this study would have been the social impact that collaborative songs have on the network itself. Unfortunately, the study didn't focus on this aspect, but I would have liked to see how collaborative pieces fared in the social currency of the network. For example, do collaborative songs end up being discussed more and by more people?
  • Some of the principles used in this paper have been put into practice in collaborative classes here at Stanford. In CS210, we use a "work-effort" score to peer- and self-evaluate our individual contributions and the "effort" put into the group project. In CS147, we also incorporated peer reviews of team projects.
  • It's my opinion that self-evaluation, especially in the context of a study, cannot be fully accurate. I think if the questions were posed more indirectly, the data might be better distributed and avoid the biases of self-evaluation.

5/19/15:

Beyond Being There

  • I was not a huge fan of this paper. The conclusions and the hypotheses formed from the study weren't data-driven. In the end, much of the paper felt like the authors talking about how they felt about certain lofty concepts. I would have found their argument for abolishing the concept of "being there" stronger had it been backed by data.
  • Let me play the devil's advocate and attack the paper's fundamental argument. I argue that it is entirely necessary for telecommunications to focus its efforts on solving the "problem of distance." I claim that distance is a fundamentally broken part of communication that should be solved. If we can agree that the primary goal of communication is to exchange information and experience another's presence, I would argue that distance is the problem that keeps these goals from being accomplished when two people are not physically in the same location.
  • This being said, I would argue that it is enough for telecommunications to try to virtualize and "imitate" the current and traditional ways of communication. Efforts to create a virtual reality and replicate the face-to-face interaction should be applauded and rewarded as solving an important and necessary problem of connecting our world.
  • Lastly, I'll make a case study of Slack and why teams seem to use it so much. In line with this paper, Slack leverages the ability to form ephemeral interest groups through the premise of channels and private groups. By banding together people of similar interests with a common goal, Slack is able to accelerate communication and funnel information in large teams to the right people. Beyond being there, the problem Slack solves is helping the right people communicate more quickly without needing to be in the same room.

Social Translucence link

  • Again, I'll play devil's advocate because I don't necessarily agree with the conclusions that this paper came to. I make the argument that providing information about people's activity presents more noise than signal. Wouldn't it be better if our decisions were motivated by objective, clear statements about what a system is doing rather than how users are interacting in a system? I'll support my argument with my own interpretations of current "social" systems.
  • First, let's analyze the effectiveness of Spotify's social network. I make the argument that Spotify's social network is not conducive to its main functionality of letting users listen to music. Showing my friends' activity on the site would not add any value to my search for the music I'm looking for.
  • I argue that other social networks that have networking and graphs of people as part of their mission statement are much more in line with this paper's argument. I'll cite Facebook as an example. With the mission of "Connecting everyone, everywhere," I argue that their social network is necessary to accomplishing their mission.
  • Perhaps social systems don't always need to be translucent. I argue that it's perhaps more effective for systems to be partly opaque, becoming fully transparent only when the user is intrigued enough to learn what's behind the opaque barrier.

5/17/15:

Information Needs in Collocated Software Development Teams [link]

  • As a software developer who's worked on different teams in industry and through school projects, I can definitely say that a few aspects discussed in this paper differ depending on the environment you're in. Emphasis on good practices, code reviews, and consistent style in production-quality code streamlines some of the processes that are more of a problem in school. When projects extend over long periods of time, the overhead of keeping code consistent and organized is definitely worth it. It's also true that industry-standard code requires collaboration, given the cross-pollination from different teams.
  • Back in 2007, I imagine GitHub wasn't the tool of choice for collaborative programming at the time. It would be interesting to revisit aspects of this study to see how much GitHub as a tool has impacted the way that programmers interact in teams. The way that GitHub organizes and presents information makes it easier for code reviews. In-line comments, commit messages, and the paradigm of reviewing and merging pull requests have become the industry standard of managing complex tasks.
  • The ways the study describes getting blocked on issues have started to be addressed by common industry practices like Scrum and daily stand-ups. Systems like JIRA and even GitHub Issues present ticketing interfaces that let teams track the progress of a blocked issue. I would think that a large subset of the problems faced in these issues would be addressed by modern systems like these, and that the average frustration of the programmers would be lower.
  • I wonder how accurate the self-reflection methods were in gathering genuine feedback on the different evaluation questions. In my opinion, some of the questions didn't really have a good scale for rating the decision-making process. I also personally didn't resonate with some of the prompts (e.g. "What is statically related to this code?").

Emergent, Crowd-scale Programming Practice in the IDE link

  • I was a fan of how this paper provided valuable deliverables to the programming community. The frustrations and pain points felt by the novice Ruby programmer are definitely something I can relate to. In this sense, statistical linting makes a lot of sense (a minimal sketch of the idea follows this list). Sourcing and seeding well-defined, well-characterized idioms would definitely be a valuable tool and resource that would save a lot of time.
  • The paper presented crowdsourced review of the Ruby paradigms as the crowdsourced portion of the study. Arguably, the open-source repositories that the researchers sourced code from are perhaps an even larger and more elaborate body of crowdsourced knowledge. In the open-source community, the knowledge of more experienced programmers definitely trickles down the pecking order. There are also community managers who maintain the state of projects (e.g. Rails) and take great care to standardize and maintain consistency.
  • I think the paper probably left out a lot of the challenges they ran into with the project. With the enormous 3M corpus they were using, I'm certain they must have run into signal-to-noise problems. I would have been interested in the ways they tweaked their parser and logic to eliminate noise and distill more well-defined datapoints.
  • All in all, I'm in line with the paper's stance on closing the feedback loop between programmers and their own knowledge and style of programming. I agree that the idea behind such an IDE would present ways to steepen and accelerate the learning curve for idiomatic languages like Ruby and Python. I think if the interface and feedback paradigm can be polished (think Xcode's autocomplete features), programmers would really take to such a technology.
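
A minimal sketch of what statistical linting could look like (my own construction, not the paper's actual system): mine a corpus for variants of the same idiom, then flag any usage whose variant is rare relative to the corpus.

```python
from collections import Counter

# Hypothetical normalized snippets mined from a corpus: two ways of
# mapping over a collection, one far more common than the other.
corpus_snippets = [
    "xs.map { |x| x * x }",
    "xs.map { |x| x * x }",
    "xs.map { |x| x * x }",
    "result = []; xs.each { |x| result << x * x }",
]

def lint(snippet, stats, rarity_threshold=0.3):
    """Flag a snippet if its variant is rare relative to the corpus."""
    total = sum(stats.values())
    freq = stats[snippet] / total if total else 0.0
    if freq < rarity_threshold:
        common, _ = stats.most_common(1)[0]
        return f"Unidiomatic? Seen in {freq:.0%} of corpus. Consider: {common}"
    return "OK"

stats = Counter(corpus_snippets)
print(lint("result = []; xs.each { |x| result << x * x }", stats))
```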

5/5/15:

Sketching Interfaces like Krazy [link]

  • Why do we prototype? Let me play devil's advocate for a bit. I'll argue that this depiction of the design process is too fixated on user testing, and that users don't necessarily know what is most effective or conducive to the best interface. True innovation, and perhaps the most influential designs, come from interfaces that toy with the border between what's familiar and what's non-intuitive. Indeed, it was Ford, who shaped today's form of transportation, who reportedly said

    "If I would have asked people what they wanted, they would've said faster horses."

  • SILK claims to combine the best of paper sketches with the merits of electronic tools. However, it's not clear whether the automated recognition featured in this system truly makes it easier to produce well-designed sketches. I would argue that it would make more sense for designers to submit sketches and then specify which elements of their sketch should be interactive. For example, sliders and buttons would be given independent axes of motion at the designer's discretion.

  • The description of the tool presented here belies a lot of interesting computer vision work happening behind the scenes. The paper admits that recognition accuracy is low, but there's merit even in the elements that the computer was able to pick up. I would have liked to see more discussion of the training sets and models the researchers used to create their classifier.
  • The paper claims that today's UI tools lead designers to fixate too much on the details of design. This leads to the paper's interesting investigation of how to design effectively for designers. If the goal of designers is to iterate quickly, then this tool is undeniably effective. However, if the goal of designers is to gather user feedback, this tool fails to cater to that aspect of design.

Voyant [link]

  • I wonder if this tool perhaps fixates on only specific types of feedback; perhaps only feedback that can be distilled into a text format is gathered. It might be more interesting to see what information can be extracted from more augmented aspects of user interaction. For example, tracking the eye itself as the user looks at a design (Ward, 2003), or other forms of feedback from the user's reaction that can't be framed in the context of text.
  • The heatmap of salient points in the image suggests a lot of interesting next steps combining computer vision and interaction design (a rough sketch of the aggregation step follows this list). For example, training a classifier to recognize well-designed elements that draw user attention. Such a classifier could then be driven to recognize and build "well-designed" interfaces autonomously. Think grid.io for interfaces.
  • The way that Voyant collects feedback seems to parallelize the collection of feedback while the user is still processing the image. I wonder if feedback, specifically people's gut instinct, might be better collected if the user is forced to jot down first impressions right after being flashed the image. An instant feedback loop, in my opinion, would be more conducive to capturing split-second impressions of the design than the drawn-out feedback window the current interface provides.
  • In the case of interfaces built for specific types of users (say a grading interface for teachers), I imagine that there may be non-intuitive use cases and goals that the typical non-expert will not pick up on. The crowd in this study is drawn from "non-experts" who may not understand the interface in the context of the task they have to complete. Because of this, I would argue the interface may need to either weight "experts" more heavily in its feedback or create a better framework for helping users understand the context in which the interface should be used.
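
As a rough sketch of the heatmap idea above (my own construction; the reading doesn't specify Voyant's aggregation at this level of detail), crowd attention points could be splatted into a smoothed saliency map:

```python
import numpy as np

def attention_heatmap(points, width, height, sigma=15.0):
    """Splat crowd attention points (x, y) into a smoothed saliency map.

    Each point contributes a Gaussian bump; the kernel choice is my own
    stand-in, not a documented part of the system.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for px, py in points:
        heat += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()  # normalize to [0, 1]

# Hypothetical first-glance fixation points from five crowd workers.
fixations = [(40, 30), (42, 28), (120, 90), (45, 33), (118, 95)]
heatmap = attention_heatmap(fixations, width=200, height=150)
print(heatmap.shape)  # (150, 200); peak lands near the cluster at (42, 30)
```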

4/28/15:

Games with a Purpose [link]

  • Von Ahn makes a strong argument for the value and motivation behind creating large, scalable interfaces that encourage collaboration on a single task. I agree to a certain extent that this is valuable, but think there is a point where machine learning should take over. It's true that crowdsourced knowledge is a good basis for seeding and providing accurate annotations or descriptions that computers cannot computationally derive from scratch. However, I claim large-scale problems shouldn't be solved by breaking them up into repetitive, individualized chunks - in this day and age, we should strive for equally scalable autonomous solutions that are built upon human knowledge.
  • Gamification is a hard problem that always exposes a research project to risk. The first difficulty in reframing research problems as games is ensuring that the resulting interfaces guide users toward the same goal the original research was trying to solve. The second is ensuring that the game attracts enough user interest and momentum to actually solve the original large-scale problem. The tension in balancing these two goals is a third complexity, and it remains the researchers' responsibility to hit the sweet spot that makes a game effective.
  • The insight behind gamification involves analyzing existing incentive structures and trying to generalize or synthesize approaches to solve a focused problem. Here, the authors apply the phenomenon of ESP to the game they describe (a minimal sketch of the matching mechanic follows this list). In hindsight, it's easy to see how users would gravitate to a game that seems to depend on a mysterious "psychic" ability. Furthermore, the implicit tension in anonymous collaboration provides a social tie to the stranger who is "working with you" to arrive at a solution. These motivations all mask the original task of image annotation, which is the genius and the insight behind such a solution.
  • I argue that these insights are much harder to develop. We're facing a similar problem in developing an effective gamification for the VisualGenome project. Though the basic premise of the problem involves sourcing annotations and creating "bounding boxes" around particular parts of an image, there is an unsolved hurdle of how to effectively motivate users to converge on a salient description of these bounded objects - concerns about accuracy, annotation salience, relevance, and redundancy must be addressed for such an idea to be effective.
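
A minimal sketch of the ESP-game agreement mechanic as I understand it from the paper (the exact matching and scoring rules here are my simplification): two anonymous players label the same image, and a label becomes an annotation only when both players produce it, excluding previously agreed "taboo" words.

```python
def esp_round(labels_a, labels_b, taboo=frozenset()):
    """Return the agreed-upon label for one ESP-game round, if any.

    labels_a / labels_b: each player's guesses, in the order typed.
    taboo: labels already agreed on for this image; they no longer score.
    """
    seen_b = {label.lower() for label in labels_b} - taboo
    for label in labels_a:
        if label.lower() in seen_b:
            return label.lower()   # first match ends the round
    return None                    # timed out with no agreement

# Two anonymous players labeling the same image:
print(esp_round(["dog", "puppy", "grass"], ["animal", "puppy"]))       # puppy
print(esp_round(["car"], ["automobile"], taboo=frozenset({"car"})))    # None
```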

Expert Crowdsourcing from Flash Teams link

  • Crowdsourcing is a powerful concept that has only recently been made possible by the structures of online communities and the open APIs that let researchers and developers hook into the power of the crowd. The application here, bringing insights from organizational behavior research into elastic and dynamic work tasks, is promising. The idea that teams can focus and hone their productivity given a target timeline, working on demand with a spontaneous team, remains to be tested at scale. However, this paper provides an excellent overview of the system and a proof-of-concept example that describes its contribution to HCI well.
  • Two issues are left to be fully addressed in this paper. It's true that there's value in collaborating spontaneously to generate and iterate upon an idea. But it's clear that flash teams fail to bring continuity and consistency to a project. With such rapid and discontinuous results passed from team to team, I doubt that such a process would be successful in projects where domain experience and context are required. In other words, it's possible that the ramp-up time to get up to speed on the project overshadows the time that is saved.
  • Secondly, the overhead of facilitating handoffs is likely heavier in real life and variable across problems. Indeed, Foundry provides a space and encourages effective workflow via continuous blocks and linked connections that represent inputs and outputs. However, these inputs and outputs are not well-defined for less technical projects that may revolve around market research or constructing business models - work that depends on effective handoff of information and insights that may not be well-formed in such a dynamic workflow.
  • Crowdsourcing is built on infrastructure that first incentivizes and then rewards workers for good work. I argue that existing structures like oDesk and Amazon Mechanical Turk still fall short of solving this innate problem. First, the incentive structure is still based on self-described skills and peer ratings. There's often a disconnect between the expectations of an employer and the perspectives of the employee that fails to be addressed in the context of such an anonymous, online system. Unless there exists a more grounded medium for communicating clear guidelines and evaluation metrics, I suspect that projects communicated over such an infrastructure will continue to fall short of expectations.

4/26/15:

Using social psychology to motivate contributions in online communities [link]

  • This paper described a well-designed study that made great use of the experimental process, in my opinion. The independent and dependent variables were clear and led to a distinct finding that the HCI community could benefit from. It's my opinion that the study could have investigated one more control group to give a more distinct baseline for the results. I would have liked to see the effect that a single email, devoid of any mention of motivation, had on the activity of the site; it would then be easier to isolate how effective an email message is in itself.
  • This study didn't address one particular group of users that simply consume content. I would argue that the concept of social loafing only applies to active users that actually contribute to the community. In online communities like RottenTomatoes and Quora, a significant subset of their users make little to no contributions to the community. Here, a distinction needs to be made for users that only consume vs. users that actually contribute.
  • Building on this thought a little more, I think there exist ways to motivate people to make the transition from consume-only user to regular contributor. Using myself as a case study: it's effective for user motivation when an interface or community uses the users themselves to call others into action. Judging from my responses to a HealthTap email reminding me to complete my profile versus a direct-message request from a fellow Quoran, I certainly feel more indebted to my fellow user than to the HealthTap script that generated that email reminder.

    Much as low-fidelity prototypes provide a low-cost way to test interface designs, these email interventions provide a low-cost way to test the impacts of manipulating relevant mental states.

  • The paper recognizes this point, but it remains to be discussed how best to design interfaces that actually inspire such motivation in an online community. This, to me, is the real challenge. How can we design implicit reminders of uniqueness and value into the interface?

The Anatomy of a Large-Scale Social Search Engine (link)

  • I'm a fan of this study and the concept behind Aardvark, which spawned what seems to be a vibrant community and an acquisition by Google! Digging more deeply, we should wrestle with the question of whether this platform really is a search engine or a glorified IM interface. It's true that the algorithm involves searching to find the appropriate user to glean an answer from, but this sort of social search isn't based on information but on the activities of a user. It seems to me that queries to such an engine place "heavier" demands on the server, given all the tagging that needs to be done to match user activities.
  • The user studies they performed led to interesting insights about user participation and the satisfaction users received after answering or receiving an answer. In particular, the experiment comparing Google and Aardvark led to an important distinction between the two in finding answers to subjective versus objective questions. Though the paper attributes Aardvark's success in the subjective space to the level of trust in another person, I wonder if trusting in a community, as in the last paper, would yield a more tempered response that normalizes extreme or biased answers.
  • In comparison to such a search engine, we should consider mediums such as forums and content-aggregation sites like Quora as alternatives for providing specialized information that has been crowdsourced in content and "quality". Forums and Quora certainly have social elements as well, but in what ways are they different from Aardvark? I argue that there's value in the crowd providing fuller insights, and value in specializing and aggregating multiple responses in the same space, but it would be interesting to dig deeper into the pros and cons of all these interfaces.
  • The probabilistic analysis for this search engine is particularly fascinating, and I wonder if the authors ever attempted to model some of their calculations as a weighted directed graph rather than a Bayesian net (the authors call it a Social Graph, but I see no evidence of weighting the edges in their analysis). I would personally be really interested in seeing how the connections play out in the form of a social graph, and would love to see what weights features like "Chattiness Match" and "Verbosity Match" have on the graph (a toy version is sketched below).
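
A toy rendering of that weighted-directed-graph idea (my own construction; the profiles, features, and weights below are hypothetical, not Aardvark's actual model): rank candidate answerers by blending topic expertise, edge weight to the asker, and a chattiness match.

```python
# Hypothetical user profiles and asker->candidate edge weights.
users = {
    "alice": {"topics": {"cooking": 0.9}, "chattiness": 0.7},
    "bob":   {"topics": {"cooking": 0.2}, "chattiness": 0.3},
}
edges = {("me", "alice"): 0.8, ("me", "bob"): 0.5}

def route_score(asker, candidate, topic, w_topic=0.6, w_social=0.3, w_chat=0.1):
    """Blend topic expertise, edge weight, and a chattiness match."""
    info = users[candidate]
    expertise = info["topics"].get(topic, 0.0)
    social = edges.get(("me", candidate), 0.0)
    chat_match = 1.0 - abs(asker["chattiness"] - info["chattiness"])
    return w_topic * expertise + w_social * social + w_chat * chat_match

me = {"chattiness": 0.6}
ranked = sorted(users, key=lambda u: route_score(me, u, "cooking"), reverse=True)
print(ranked)  # -> ['alice', 'bob']
```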

4/21/15:

Led class discussion following a lecture by Stuart Card on the topics of:

  1. Understanding the Natural and Artificial Worlds (Simon)
  2. Technology and Phenomena (Arthur)

See discussion slides or reading notes

4/19/15:

Cognitive Engineering Models [link]

  • The paper provides a good abstract model of human interaction with a generic interface. The paper fits well within its framework as a psychological study that analyzes and models a human processor. I like how the paper applied traditional psychological analyses to the "machinery of human cognition" - it read very much like the way traditional psych literature analyzes human behavior.
  • If this paper can analyze such tasks at this granularity for humans, how would we compare to computers? I would imagine that computers can perform all of these "tasks" much more effectively and quickly. Does that make computers better workers than us? Is there value in the way humans can adapt to changing scenarios and edge cases that computers may not be able to handle?
  • In contrast to the other paper, this paper gives much more depth to its quantifiable measures. I appreciate the framework the paper uses to describe the interactions humans have with interfaces. In particular, the granularity with which the paper builds intuition for a user's uncertainty model took into account multiple factors that I hadn't considered ("equally probable alternatives", information-theoretic entropy, etc.); see the sketch after this list.
  • Pirolli measures task completion quite effectively, but doesn't provide much depth or analysis for more subjective tasks, such as identifying what decision to make and how such a decision is engineered from the stimuli of the environment. I would have liked to see fewer details about GOMS models and more details about the decision tree that laid the foundation for such decisions. The paper acknowledges that as HCI tasks grow more complex, models may become incapable of describing the nuanced ways that stimuli are processed into an action. However, an attempt to address more of these nuanced and subjective behaviors would have been appreciated.
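
For reference, the uncertainty model mentioned above is, I believe, the Hick-Hyman law as presented in the Model Human Processor literature (treat the constant as approximate):

```latex
% Hick-Hyman law: decision time is linear in the entropy of the choice.
T = I_C \cdot H, \qquad H = \log_2(n + 1)
% for n equally probable alternatives; with unequal probabilities p_i,
% the entropy generalizes to:
H = \sum_i p_i \, \log_2\!\Big(\frac{1}{p_i} + 1\Big)
% where I_C is an empirical constant on the order of 150 ms/bit.
```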

Exploring and Finding Information [link]

  • I'm not convinced that shopping websites are the appropriate parallel to make, given the author's focus on information/food-foraging parallels. Food as a resource is innately tied to the sense of community and family a forager feels responsible to, and to the natural survival instinct; our shopping interests, by contrast, tend to be more individually tied and scoped within personal preference. E-commerce sites are much more concerned with targeting and guiding customers toward specific products than with the resource-rich wild that engenders fight-or-flight responses. I would have found a navigation metaphor - the human ability to wayfind and journey across distant lands - to be much better in this sense.
  • Though the study was limited by the technology of its time, two arguments point to why its findings may not apply to today's interfaces. First, visual elements like tabs, fixed "smart-tracking" navbars (i.e. scrollspy), and richer navigation have changed how users scan a page. Secondly, mobile and web interfaces have been benefitting from the advent of Bayesian classifiers and personalized big data, which improve the relevance of search results and next steps. That being said, text is still the primary medium through which information is conveyed, and the study does provide relevant findings in that regard.
  • Specifically, the study verified that the information scent curve shows the number of relevant documents decreasing at a slower rate as the user continues to find more relevant documents (the standard patch model behind this diminishing-returns curve is sketched after this list). Findings suggesting that users scan titles, appreciate relevance, and optimize their search for information were not well quantified in the review. Additionally, the study didn't seem to suggest particularly practical or well-formed action items that would help designers and developers assist their users in getting the most value out of their interfaces.
  • I liked how specifically the paper outlined the user flow for accomplishing particular tasks. The production rules fit nicely within the framework that the model took with production memory. I would have appreciated more rigor in the way the paper described weights and associations between chunks in memory. It would definitely have been interesting to investigate the paper's discussion of a "spreading activation network" under the lens of a directed graph.
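
For reference, the diminishing-returns behavior above is usually formalized in information foraging theory via Charnov's marginal value theorem (a sketch from memory; the paper's exact notation may differ):

```latex
% Overall rate of information gain R, with between-patch time t_B,
% within-patch time t_W, and cumulative within-patch gain g(t_W):
R = \frac{g(t_W)}{t_B + t_W}
% The optimal forager leaves a patch when the marginal gain falls to
% the overall rate, i.e. exactly where the gain curve flattens out:
\frac{dg}{dt_W} = R
```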

4/14/15:

inFORM [link]

  • This is a fascinating piece of work that almost doubles as an artistic statement. I'm impressed by and greatly appreciate the clear efforts of the hardware specialists that made this study possible. I'm also a fan of the technical upper bounds (1.08 N/pin, 0.968 m/s downward speed) that define a capped but large space in which the researchers could explore interactions. An interface such as this is certainly no small task, and I'm glad the researchers opted to include all the details, back to front, allowing readers to get a clear sense of what they built in addition to why they built it.
  • The interface designed here combines the tactile, the visual, and the actual 3D velocity (speed + direction) that the interface itself produces. This brings the number of variables to at least seven, compared with the color + x/y dimensions we use on our phones. These additional variables contribute to an exponential growth in possible interactions. Forget swiping, pinching, and four-finger swipes; this interface prompts boundless unexplored forms of tactile affordance.
  • What if we imagine this interface being more granular? Let's do away with 900 pins and imagine 9,000,000. The applications for such technology in spaces like architecture, structural biology, or even game design would be enormous. Touch screens would turn into touch cubes that add all sorts of dimensionality to complex spaces that struggle with a 2D representation. Even learning physics from such an interface would be incredibly insightful.
  • The one thing I found this paper lacking in is the disconnect between the paper's focus on "moving passive objects" and the connections such an interface could make to existing designs through visual means. It's clear that the wow factor of such an interface lies in the motion it lends these objects, but I had expected more connection between this work and the current interfaces that exist for visual tasks. The paper mentions issues with "jarring" its users with unexpected motion. Instead of trying to make these motions smoother, why not incorporate the already-understood visual affordances that our screens present?

Multi-Touch Systems that I Have Known and Loved link

  • I enjoyed the historic commentary that Buxton presents. The timeline provides a framework and context for understanding today's touch systems. Understanding where our interfaces come from definitely gives insight into what has worked before and why we've moved in the particular directions that we have. Though sites like pttn provide great depictions of our visual interfaces, I would love more discussion of our touch interfaces, such as the commentary Buxton provides.
  • The bi-manual input in particular provided an interesting insight into parallelism. I took a brief skim through the paper Buxton linked to understand how two-handed systems (think gear shift vs. steering wheel in our cars) can lead to more effective and parallelizable interfaces for users. It's definitely understandable how much steeper the learning curve is, but I argue that it's this sort of "steepness" that allows the knowledge and skill to stick as a habit. How easy is it to pick up driving again after months of inactivity?
  • I thought the conceptual thought exercise presented in TouchLight was really interesting. Though the link doesn't seem to be working, the work alludes to an interactive interface with high potential for extensibility. With multiple degrees of freedom in number of users, gestures, and even the visual representation, I would be really interested in seeing what visual and gesture affordances came out of that study and how we might replicate them in the interfaces we have today.
  • I was a fan of the way Buxton broke down all the variables that go into touch interfaces. It really got me thinking about how extensible our current touch interfaces really are. The variables and points of data that our phone screens can synthesize into a larger and more complex picture are not immediately apparent, but Buxton's claim that "there's more to touch-sensing than contact and position" definitely deserves some deeper thought before designing the interfaces we interact with on our phones.

Tangible Bits [link]

  • This paper relates to a TED talk I saw a year ago that hacked Wii remotes to produce cheap, low-cost versions of the interactive hardware discussed in this study. It prompts the thought of cost and availability: what if we could make the technologies in this study as widespread and cost-effective as the Wii remote? Strip away the stigma we have around the remote in the way we hold and use it, and we have a functional input device with a sense of space, direction, and speed. If we take these already-functioning input devices and flip the mental model on its head to implement more ubiquitous uses for Wii remotes, I would be interested to see how users respond at scale.
  • I like that this study makes a much more concerted effort to emphasize how it fits within the realm of ubiquitous computing. It's clear in what ways features of the ambientROOM "fade into the background" of ubiquitous computing. Though inconsistencies should be noted (e.g. "sounds of rain could be distracting"), the paper makes consistent observations within the realm of ubiquitous computing. The depths the researchers go to in finding appropriate metaphors to bridge the physical and digital worlds are exactly the channels that give ubiquitous computing its depth.
  • I wanted to dig deeper on the point the paper makes about "the ways ambient media can move naturally into the foreground." I find this statement somewhat in conflict with Mark Weiser's affirmation of the "invisible ideal." It's my opinion that tangible items that prove too unfamiliar to the general public's mental models will inevitably cause an unnatural transition from the background to the foreground. Designing ideal bridges between the digital and physical, in my opinion, should keep the information in the passive background and not risk this unnatural transition.
  • Smart devices nowadays make more and more of an effort to blend into the background while constructing an ever more complex profile of the users they track. Given how widespread and inconspicuous these devices are, aren't they fitting into the paradigm of ubiquitous computing? I argue that these devices pose an effective counterargument to this paper's claim that tangible devices lead to a richer experience of the digital world. Devices don't necessarily need to be grasped or touched to provide a rich, even multisensory experience.

4/12/15:

Activity Sensing in the Wild [link]

  • This paper goes a long way toward testing user interaction in a new frame of mind that I assume the majority of users aren't used to. The paper describes the commercial products with embedded sensors at the time, and it's clear that the majority of those products were not attempting to bridge the physical and mobile worlds much at all. Linking all of the phone's infrastructure to the hardware sensors for 12 different prototypes was probably quite time-consuming, but it's clear that the focus of this paper is on the interface and the interactions people used to monitor their fitness.
  • That being said, my main contention with this paper was the lack of generalizability. I was under the impression that the sample size was too small and the responses too subjective to yield much insight into other interfaces, particularly those without such sensors. It remains relevant to the focus of, perhaps, a product development team, but not to research.
  • Another contention with this article involved the heavy use of "manually entered activities." This detail makes it hard to accurately gauge the use of the sensors for activities like cycling and other common physical activities such as sports. When the system essentially reduces to text-based entry, it's hard to credit the major insights that the interface claims for its sensor-focused interactivity.
  • Within the context of ubiquitous computing, early prototypes of today's wearable devices must have pushed the boundaries of what was possible. It's interesting, however, to see the main course of these fitness-oriented devices turn toward smaller, wearable devices rather than the phone itself. I wonder if today's market would really take to the UbiFit system incorporated into today's smartphone rather than a standalone device. Would we really react so positively, or find it foreign compared with the other applications we use our phones for?

HydroSense link

  • This study provides an interesting early view of the perception of "smarthomes" that today's Google Nest is pushing toward. The important distinction in this paper, however, is its emphasis on "single-point sensing," whereas the smarthomes of today integrate multiple repositories of data collection to form a cohesive story. I liked that the study was able to exhibit such accuracy against the true amount of water used.
  • Much of the discussion involved analysis of the types of valve detection, which felt irrelevant to the true nature of the paper. In my opinion, the novel contribution to the conversation here was the way a user might react to having such accurate measurements of water usage. I would have liked to see much less focus on the analysis of how the flow and amount of water were calculated, and much more discussion of the implications of such sensors and what directions this work could take in terms of integration into the home.
  • I liked the level of granularity the paper went into in discussing what the system did not do. The point the authors made about "partially-open valves" definitely proves relevant to the everyday user (like myself!). Furthermore, the discussion of overlapping events raises a common everyday bathroom scenario where several devices are active at one time. This gives a good reflection of what the system is actually able to measure in the typical setting of water usage in a house.
  • I would have liked to see an additional study extending this work to discover effective ways to present the information HydroSense gathers back to the user. Given the Nest interfaces of today, it's clear that an effective smarthome involves a good deal of thought in designing an effective yet elegant interface that literally lives with the user. If we were to take this study one step further and build out ways to actually control water usage remotely, how might that affect our water usage? What implications might that have on our drought?

4/7/15: Design + Creation

Getting the Right Design and the Design Right [link]

  • This paper was an interesting, almost psychological study of how diversity of perspective affects the perspective itself. I found the conclusions applicable to needfinding and prototyping research, and an argument for re-evaluating the design-thinking process, which seems focused on iterating on a single main development track of a project. I think the study was well done in terms of the scope and methods used to assess the hypotheses. It's nice to see paper prototyping as a vehicle not only to speed product development but also to reduce research costs.
  • I was personally surprised by the contention with the last hypothesis, as I initially agreed with the claim that creativity would be more likely in the scenario with multiple designs. I think the conclusions the study came to surrounding making suggestions make sense, though I am skeptical of whether participants simply felt less primed to suggest changes for fear of being 'too unoriginal' when surrounded by multiple renditions of a single design. If we imagine a discussion section filled with students of different perspectives, I know from personal experience how I feel better equipped to think of discussion points via synthesis and reimagination of what I've already heard. However, despite being better equipped, there is a higher fear of making an 'insignificant' or 'unoriginal' remark given the diversity of opinions around me.
  • The discussion about choice and decisions reminds me of an interesting point tied to choice paralysis, as made popular by Barry Schwartz in "The Paradox of Choice" (and his TED talk). I wonder if these findings would hold after extrapolating the experiment to include more choices. I would expect that as the number of choices increases, behavior will grow to be similar to that of the analysis of a single design. Interestingly enough, design thinking in general seems to push for quantity of ideas over quality - why doesn't that sort of process seem to suffer from choice paralysis?
  • Though this paper focused mainly on designs for user interfaces, I wonder if findings are relevant and applicable to other forms of design in the frameworks of art, musical composition, or even construction blueprints! What if we have experts or amateurs in the subjects take a similar approach to their design efforts? Would findings indicate a similar trend across the severity and variety of critiques? I would expect differences in subject areas like these to not show significant trends given the complexity of the critiques in these design settings.

Webzeitgeist: Design Mining the Web [link]

  • I appreciate this paper's care in isolating and focusing on a specific research question: "What can we learn from mining design from the web?" The insights here seem well abstracted from the noise of the web, given the way models were defined in terms of position, width, and even the granularity of the number of child elements. The definition of "vision" was particularly interesting, given that it focused on the most common color in an element and paid special attention to edge pixels. Unfortunately, the paper didn't discuss major findings related to these more interesting "vision" features, which I feel are an integral part of design.
  • The paper made a solid effort in breaking down the components of "design" into quantifiable and easily measurable qualities. However, I wish it explored more complex relations between the visual elements themselves. Specifically, how does the visual interaction between different elements on a webpage enhance or detract from the holistic design value of the page? Naive approaches might include overlap or dx/dy features relative to parent or sibling elements (see the sketch after this list), with standard learning algorithms then trying to predict which features matter.
  • I personally enjoyed the amount of technical detail the paper went into. I can see how others might argue that such attention to technical detail is superfluous or irrelevant to the exploration of design principles. However, the type of mining the authors attempted is definitely novel and clearly worthy of extensive discussion on its own. In particular, the creation of the Design Query Language provided some fascinating insights into how design features can be organized and summarized.
  • I would also have liked to see an analysis of popular pages, to see if correlation and design-mining analysis across popular sites with heavy traffic can lead to the prediction of particular design features that foster "well-designed" pages. Of course, this venture would require careful investigation of whether a "popular" site is comparable to a "well-designed" one. However, the investigation raises an interesting inquiry into whether good design conventions correlate with statistically significant design features according to the design-mining techniques presented in this paper.
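
A naive sketch of the relational features suggested above (my own construction, not Webzeitgeist's actual schema): given element bounding boxes as (x, y, w, h), compute dx/dy offsets from the parent and total pixel overlap with siblings.

```python
def overlap_area(a, b):
    """Pixel overlap between two axis-aligned element bounding boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = min(ax + aw, bx + bw) - max(ax, bx)
    dy = min(ay + ah, by + bh) - max(ay, by)
    return max(dx, 0) * max(dy, 0)

def relational_features(element, parent, siblings):
    """dx/dy offsets from the parent plus total sibling overlap."""
    ex, ey, _, _ = element
    px, py, _, _ = parent
    return {
        "dx_from_parent": ex - px,
        "dy_from_parent": ey - py,
        "sibling_overlap_px": sum(overlap_area(element, s) for s in siblings),
    }

# Hypothetical page fragment: a header containing a logo and a nav bar.
header = (0, 0, 800, 80)
logo = (20, 10, 100, 60)
nav = (140, 10, 400, 60)
print(relational_features(logo, header, [nav]))
# -> {'dx_from_parent': 20, 'dy_from_parent': 10, 'sibling_overlap_px': 0}
```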