Tag: sketchbook

Honorary Street Names

I’m really interested in informal, immersive learning experiences.

I’m particularly interested in ways we can leverage cities, and the experience of living in and moving through them, to enrich ourselves. We, as human sponges, are soaking up all kinds of information from our daily experiences, whether we intend to or not. Why not design those everyday experiences to expose us to things that might stretch our minds a little, or help us connect a little better with the space and people around us?


There are many streets in New York City with honorary names, in addition to their formal names. I’m always a little disappointed when I see a name on a street sign that I don’t recognize. In this age of handheld, internet-connected computers, I can obviously quickly look up the name, but it always seems like a missed opportunity not to have information on or near the street signs about the honoree.

And even if I look the person up, I still lose contextual information. I may find out why the person was significant to the world, broadly. But I don’t know what spurred people here, in this physical place, to co-name a street after that person. And I don’t know when that happened.

And in many cases, the person had local significance more than worldwide fame. Take Abrian Gonzalez Place, in Brooklyn. It’s hard to find much in a quick Google search about Abrian. And in fact, Google originally thought that perhaps I’d misspelled the name, and so showed me results for “Adrian Gonzalez,” as well, which wasn’t helpful.

Gilbert Tauber created a database of (most of) the NYC streets with honorary names, along with what we know about their namings and namesakes. You can read more about the origin of the database over at the NY Times. Tauber’s database says only, “Abrian Gonzalez (1985-2006) attended P.S. 15, as a youth. He inspired the other students to stay away from crime and drugs.”

Abrian clearly had significance to people in his community. It’d be great, from a memorial standpoint, to have even that little blurb at the base of the signpost of Abrian Gonzalez Place.

And having that information somewhere in the actual physical location would help target learners (you know, passers-by) situate their learning.

At its simplest, this could just be posted text. But it would be great if it could grow beyond that, into something more dynamic.

How might we help people currently, physically on a given co-named street have an active, living engagement with a community choice and context that may be many years old?

I don’t have the answer to that, but here’s my first brief pass: maybe, along with the placard, there’s a QR code, and scanning it takes you to a page on a site not unlike Place Matters or City of Memories, with a page specifically for Abrian Gonzalez Place. There, you can see pictures people have taken nearby, read a story someone has posted about Abrian and his life, or look at pictures of him. The page goes more in-depth about who was involved in getting the street co-named, and their motivations.


These are some ideas/associations that are inspiring me:



Next steps:

  • define the target problem/objective more specifically
  • do more rigorous ideation on a solution
  • get a prototype up and running and test it out in various locations


Potential issues:

  • as with any public collection of data, people might submit factually incorrect, misleading or irrelevant material
    • read more about public history work and how projects like those above deal with these kinds of issues

Dynamic Vocabulary Support for Videos

I was watching a YouTube video this afternoon (on Advanced Password Recovery, if you’re interested). There were a number of technical terms in the video that I was not familiar with.

I was only half watching this particular video, so I wasn’t inclined to pause and take notes to reinforce and research later, as I often do if I’m more actively watching something.

But I thought it’d be great if some of those technical terms showed up on-screen, because I’m a pretty visual learner, and seeing new words helps me retain them a lot better than just hearing them does. (Of course, this is also just generally good learning design: it helps learners to be exposed to target content in different ways.)

However, that would take a lot of commitment from video creators in post-production: deciding which words to highlight, and adding the text at the appropriate moments in the video.


So, what I propose is an app/website that scans the caption track for low-frequency words in the given language. It then superimposes those words over the video at the moment each is spoken, and leaves them on screen for, say, 3-4 seconds.


The technology for this certainly exists.

YouTube has automatic captioning now (and obviously, natural language processing technology in general is improving daily), and the API allows retrieval of the captions.

Then you’d need a database of word frequencies. Something like this from the Corpus of Contemporary American English.
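Sketched in Python, the core loop might look something like this. Everything here is an illustrative assumption, not a real implementation: the cue format, the frequency numbers, and the threshold are all made up, and a real version would pull captions via the YouTube API and frequencies from something like COCA.

```python
import re

# Toy stand-in for a real frequency database (counts per million words).
# These numbers are invented for illustration.
WORD_FREQUENCY = {
    "first": 3000, "we": 40000, "apply": 900, "salting": 2, "to": 45000,
    "the": 50000, "hash": 5, "then": 8000, "check": 1200, "password": 120,
    "against": 1500, "table": 800,
}

RARITY_THRESHOLD = 10   # highlight words with frequency below this
DISPLAY_SECONDS = 3.5   # roughly the 3-4 seconds proposed above

def overlay_events(cues, freq=WORD_FREQUENCY, threshold=RARITY_THRESHOLD):
    """cues: list of (start_seconds, caption_text) pairs.
    Returns (word, show_at, hide_at) overlay events for rare words."""
    events = []
    seen = set()
    for start, text in cues:
        for word in re.findall(r"[a-z']+", text.lower()):
            if freq.get(word, 0) < threshold and word not in seen:
                seen.add(word)  # highlight each rare word only once per video
                events.append((word, start, start + DISPLAY_SECONDS))
    return events

cues = [(12.0, "First we apply salting to the hash"),
        (15.5, "then check the password against the table")]
print(overlay_events(cues))
# → [('salting', 12.0, 15.5), ('hash', 12.0, 15.5)]
```

Note that words missing from the table look maximally rare here, which is roughly the right behavior for a comprehensive frequency list but would need care with names and typos.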

Ideally, this would be a dynamic database, so that it could adapt to shifts in language (e.g., new slang). And I’m sure Google (if anyone’s out there listening) has their own interesting, robust, dynamic language data that, if they took this on themselves, would be incredibly helpful in developing this tool.



Originally (…an hour ago), I’d imagined this as a tool for video creators, but as I thought about it more, it’d be a great tool for viewers as well, or instead. Then the viewer doesn’t depend on a creator making their video more learning-oriented in order to reap the benefits: the viewer could choose to watch a video through this tool, to help them learn the content better.


Additional features (incomplete):

  • Vocabulary text size, color and font are adjustable
  • Length of time word stays on the screen is adjustable (default: ~3 seconds)
  • Words by default are displayed in the upper left corner of the video screen, and stack below each other if multiple uncommon words are spoken in a row


Potential users:

  • People like me, who are watching a video to learn something from it and/or are obsessive note takers
  • People learning a new language (e.g., watching videos in the new language)
  • People with language difficulties in their native language
  • People with language processing disorders
  • ?


Addressing potential issues:

  • Obscuring video content: vocabulary text may obscure visual content in video
    • could make this “smarter”: use facial recognition to avoid placing text over faces in videos, and OCR (optical character recognition, i.e., visually identifying letters/text) to avoid obscuring text already in the video; but you’d want it to still default to the same position, for consistency for the learner
      • even avoiding text and faces, of course, the vocabulary text could still obscure important visual content
  • Accuracy of captions: this design counts on video creators using captions, which many do not; and even *if* they’ve added automatic captioning, it counts on someone double-checking that the captions are correct
    • This doesn’t fix the issue, but in general, anyone posting a video with spoken words should include captions, at least for accessibility’s sake. And YouTube (where many videos are posted) makes it incredibly easy to at least have rough captions for your video. As a society, we need to work on designing content and tools that are accessible to everyone.
  • Word frequency: it’d likely be an ongoing process to determine how rare a word should be in order to be highlighted
    • maybe viewers could adjust the rarity threshold before viewing a video
    • a way for viewers to flag words that they didn’t need highlighted (they already knew), and build up a personal database that could be used to inform the highlighting of future videos they view via this app
  • Single words vs. phrases: this design is built upon frequency of single words, but some technical jargon uses relatively common words in uncommon combinations, or uses common words with a less common meaning that’s specific to the field
    • this might require a phrase frequency database, as well, which may or may not be as available (I’m not going to research it right now)
    • tags on videos might help narrow the scope of word frequency, by helping to identify the general topic/field of the video content
  • Vulgarities: highlighting vulgar words
    • these could be blocked from being highlighted when the tool analyzes the captions
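Two of the mitigations above, a personal list of already-known words and a blocklist for vulgarities, could be layered on as a simple viewer-side filter over the highlighted words. A hypothetical sketch, with made-up names and a made-up (word, show_at, hide_at) event format:

```python
def filter_for_viewer(events, known_words=(), blocked_words=()):
    """Drop overlay events for words the viewer has flagged as already known,
    plus any blocked (e.g., vulgar) words the tool should never highlight.
    events: list of (word, show_at, hide_at) tuples."""
    skip = set(known_words) | set(blocked_words)
    return [e for e in events if e[0] not in skip]

events = [("salting", 12.0, 15.5), ("hash", 12.0, 15.5)]
print(filter_for_viewer(events, known_words={"hash"}))
# → [('salting', 12.0, 15.5)]
```

The `known_words` set is exactly the personal database idea: each word a viewer flags gets added to it, and future videos they watch through the tool are filtered against it.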