Welcome to the IdeaMatt blog!

My rebooted blog on tech, creative ideas, digital citizenship, and life as an experiment.

Saturday
Aug 29, 2015

Proposal: An online skeptical toolbox

For some time I've been collecting project ideas to help with my overall goal of "computing in the service of humanity" (a phrase I recently picked up). While I definitely want to find consulting work in this area, starting a personal project in the meantime is important to me. Out of a bunch of ideas, I've decided to start with an online skeptical toolbox.

What finally kicked me in the butt were two things: 1) the publication of Google's paper Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources (which got a good bit of coverage), and 2) the recent Committee for Skeptical Inquiry article Online Tools for Skeptical Fact Checking by Tamar Wilner. To that end I've created a GitHub project that at the moment has two files: the proposal itself and a somewhat-organized XMind mind map file, skeptical-toolbox.xmind, that lists some details.

As I said in the Implementation section, the tools cover a range of complexity, which lets us quickly roll out something useful with the simpler ones and progressively introduce more sophisticated tools as the toolbox develops. I plan to write it in Python + Flask (the same stack I used for PeepWeather), but I'm quite open to other languages and frameworks, such as Ruby on Rails, if someone steps up and convinces me. One thing in Python's favor is the popular Natural Language Toolkit (NLTK) with its information extraction tools, especially around entity recognition.
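To make that concrete, here's a minimal sketch of the kind of entity extraction NLTK offers out of the box (the claim sentence and the listed corpus downloads are just illustrative; this isn't toolbox code yet):

```python
# Minimal sketch: named-entity extraction from a claim using NLTK.
# Assumes the 'punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker',
# and 'words' data packages have been fetched via nltk.download(...).
# The claim sentence is made up for illustration.
import nltk

claim = "NASA announced that the Kepler telescope has found a new exoplanet."

tokens = nltk.word_tokenize(claim)   # split the claim into word tokens
tagged = nltk.pos_tag(tokens)        # part-of-speech tag each token
tree = nltk.ne_chunk(tagged)         # shallow named-entity chunking

# Collect the labeled entities (e.g., ORGANIZATION, PERSON, GPE).
entities = [(" ".join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees()
            if subtree.label() != "S"]
print(entities)  # e.g., [('NASA', 'ORGANIZATION'), ...]
```

Extracting the entities a claim mentions seems like a plausible first step toward matching it against fact-checking sources, though a real pipeline would obviously need much more than this.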

What do you think? It's very early days (day zero, I suppose), but I hope that sharing this will generate some thoughts. If you're interested in helping build this - awesome! Just comment here or drop me a line.

(Image: Memory Belt)

Tuesday
Aug 11, 2015

Available! Versatile Software Engineer, 20+ Years Experience, Object-Oriented & Extreme Programming

After taking a break from computing in 2009 and creating a new kind of social platform - one based on treating life as an experiment - I happily returned to CS research in 2011. I was asked to re-join the AI group at UMass where I previously helped build Proximity, a platform for machine learning research. However, that position's funding has ended (as is the way of research's ebb and flow) and I'm excited to move on to the next adventure.

In those four years I got a lot done, including exploring ways to scale up the Ph.D. students' algorithms [1], coaching students in Extreme Programming, the agile software development methodology I love so much, implementing lab infrastructure improvements (I consolidated multiple aging servers into a single modern cluster and got the lab using Confluence, a commercial wiki), and of course collaboratively designing and coding the lab's new causal learning system in Python (a language I love, BTW, though I still enjoy Java).

As a pleasant surprise, before the funding ended in May, another CS professor asked me at the last minute to help his lab get some urgent work done over the summer, which I was happy to do. While there I benchmarked and started an optimization effort for their new information extraction pipeline, led the team in meeting a crucial grant deadline, and prototyped a web-based PDF annotation tool [2] for creating a gold standard to evaluate their algorithms' performance against. Along the way I was introduced to Node.js, continued learning Scala (I had written some for my GraphX work), and picked up more JavaScript for the web app.

Looking forward, I've started my job search for my next project, with the goal of finding work that's challenging, engaging, and meaningful. I've not had to do one in a while, so the process itself is a kind of experiment. What approaches will work? What is out there? Who is doing cool work? Exciting, and a little scary.

So: if you know someone looking for a versatile software engineer with lots of experience who writes excellent code, then please - drop me a line. My LinkedIn profile is linkedin.com/in/matthewcornell.

  • [1] I explored multiple approaches, including single-node SQL using PostgreSQL, distributed (MPP) SQL using Impala on the Hadoop ecosystem and Vertica, and graph processing systems such as GraphX, Neo4j, and Giraph.
  • [2] The prototype allows users to load a PDF file as a browser-renderable SVG, draw and edit rectangular regions overlaid on the text, and save and load them from a server. I used the standard web technologies of JavaScript and JSON, all running on Play Framework and its Java API. I used Fabric.js for the direct manipulation UI, which I really enjoyed working with (check out the awesome demos).
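For flavor, here's roughly what the save/load piece of such a tool looks like. Note that the actual prototype ran on Play Framework's Java API, so this is only a hypothetical Flask-style analogue, with made-up endpoint paths and rectangle fields:

```python
# Hypothetical Flask-style analogue of the prototype's save/load round trip.
# The real tool used Play Framework's Java API; the endpoints, the in-memory
# store, and the rectangle fields below are invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

# document id -> list of regions, e.g. {"x": 10, "y": 20, "w": 120, "h": 40, "page": 1}
REGIONS = {}

@app.route("/regions/<doc_id>", methods=["GET"])
def load_regions(doc_id):
    return jsonify(REGIONS.get(doc_id, []))

@app.route("/regions/<doc_id>", methods=["PUT"])
def save_regions(doc_id):
    REGIONS[doc_id] = request.get_json()  # rectangles drawn in the browser (Fabric.js)
    return jsonify({"saved": len(REGIONS[doc_id])})
```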

(Image from Fountain on Boston Common.)

Wednesday
Apr 22, 2015

Smith College Guest Lecture, "How to build a simple production web site in Python"

Last week I had a fun gig giving a one-hour guest lecture for Smith College's freshman Intro Computer Science through Programming class. The topic was "How to build a simple production web site in Python", in which I was invited to share the process and technology that went into building PeepWeather.com. The rationale was that it would give the students a taste of what is possible to create using some relatively straightforward Python frameworks and libraries, hopefully providing some motivation for those wondering what their current and (necessarily) bounded assignments are good for.

I had a great time meeting the challenge of putting together a presentation that had breadth and was motivating, yet at the same time understandable to the audience (with the right amount of stretching to keep them engaged). It went well, with some excellent questions during and after the talk. Here I want to share a little about the concepts and technology I covered, in case you're considering giving a similar presentation about your own site. For the curious, here's the PowerPoint as a PDF (unfortunately the video did not work out).

Technologies

I named these technologies, describing each pretty briefly, except for diving deeper into Flask:

Concepts

I covered the following at various levels of detail:

The Code

I jumped into the code at the appropriate points, folding code as needed so I didn't overwhelm anyone. I stuck to high-level basics, highlighting the concepts they already know, including classes and methods. Mainly the focus was on how the concepts got implemented in Python, with no significant code detail; there simply wasn't time. (Note that IntelliJ IDEA's presentation mode worked great for the projector.)
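If you're curious what "the code" looked like at the highest level, here is a generic Flask sketch with roughly the same shape (not PeepWeather's actual code; the route, data, and template name are placeholders):

```python
# Generic Flask sketch with roughly the shape shown in the lecture.
# Not PeepWeather's actual code; the route, data, and template name are
# placeholders. Assumes a templates/index.html Jinja2 template exists.
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    # A real site would fetch live data here (e.g., an hourly weather forecast)
    # and hand it to the template for rendering.
    hourly = [("9 AM", "sunny"), ("10 AM", "cloudy"), ("11 AM", "rain")]
    return render_template("index.html", hourly=hourly)

if __name__ == "__main__":
    app.run(debug=True)
```

Even a little skeleton like this was enough to hang the lecture's concepts on: routes, templates, and where the "real" data fetching would go.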

Motivation

I tried to make two points to get the students excited. First, they are programmers, and so they have a powerful skill: they can create web sites! I suggested they pay attention to any thoughts that come up like "Wouldn't it be great if you could __?" or "Why isn't there a site to __?" Second, I pointed out that putting together even a small site is a fantastic portfolio piece that impresses prospective employers. It shows that they can have ideas and, better yet, bring them into reality on their own. I wrapped up by saying that I had a ton of fun writing PeepWeather and learning the various technologies. It's extremely gratifying to bring something into reality (if that's the right word for something as virtual as a web site :-)

Overall I enjoyed both the talk's preparation and presentation, and I hope to give it at the other Five Colleges.

Friday
Mar 27, 2015

Loving the wonderful book Code Complete 2

After two false starts [1], I am working my way through Steve McConnell's book Code Complete: A Practical Handbook of Software Construction, Second Edition (links: author, book, Amazon). He describes the topic as "an extended description of the programming process for creating classes and routines." Extended is right!

There are very good reasons why it is in the top 10 in software design at Amazon: halfway through, I've found the book to be unique and very well written. (I am not alone in this assessment [2].)

It is unique because it goes into a level of detail about coding that is specific and really deep. McConnell has given incredibly clear thought to the minutiae of programming, and then shared it with fine writing. My boss chuckled when I showed him that there are four chapters (table of contents here) on variables, one dedicated solely to naming them. Awesome! I've been writing software for a long time, and it was gratifying to find that McConnell has put into words things I'd learned intuitively, such as Think about efficiency:

... it's usually a waste of effort to work on efficiency at the level of individual routines. The big optimizations come from refining the high-level design, not the individual routines. You generally use micro-optimizations only when the high-level design turns out not to support the system's performance goals, and you won't know that until the whole program is done. Don't waste time scraping for incremental improvements until you know they're needed.

But beyond being reminded of these (and there are many, many of them, for example: "Initialize each variable close to where it's first used"), a big payoff is the ideas that are new to me, of which there are plenty. The book is simply too detailed to summarize its main points, so I'll just share some tidbits that jumped out at me:

  • "Upstream Prerequisites" ("programmers are at the end of the food chain. The architect consumes the requirements, the designer consumes the architecture, and the coder consumes the design.") His point: "the overarching goal of preparation is risk reduction. By far the most common project risks in software development are poor requirements and poor project planning." (Note: I found the StackOverflow question Software design vs. software architecture answered terminology confusion I had about those terms: "I think we should use the following rule to determine when we talk about Design vs. Architecture: If the elements of a software picture you created can be mapped one to one to a programming language syntactical construction, then is Design, if not is Architecture.")
  • Programming IN a language (limit thoughts to constructs the language directly supports) vs. INTO a language (decide what thoughts you want to express then determine how to do so using the language's tools).
  • "Wicked problems" (like software design): can be clearly defined only by solving it or solving part of it.
  • "Design is a sloppy process": is about tradeoffs and priorities; involves restrictions; is nondeterministic; is a heuristic process; is emergent.
  • Software's Primary Technical Imperative: the importance of managing complexity by dividing a system into subsystems. "The goal of all software-design techniques is to break a complicated problem into simple pieces. The more independent they are the better."
  • For the sake of controlling complexity, you should maintain a heavy bias against inheritance (and prefer containment): Inherit when you want the base class to control your interface. Contain when you want to control your interface.
  • Aim for loose coupling (the manner and degree of interdependence between software modules) and strong cohesion (the degree to which the elements of a module belong together).
  • Routine length (he generalizes procedures, functions, and methods as "routines"): Allow them to grow organically up to 100-200 lines (excluding comments & blank lines). (My routines tend to be smaller - I'll have to give this more thought.)
  • Routine parameters: pass <= ~7 of them. More implies too much coupling.
  • Design Practices - Capturing Your Design Work: code documentation, wiki, email summaries, camera (whiteboard) vs. a drawing tool, flip charts, CRC cards (I love 'em), UML diagrams.
  • The Pseudocode Programming Process: It was helpful to see this named and formalized. "Once the pseudocode is written, you build the code around it and the pseudocode turns into programming-language comments. This eliminates most commenting effort. If the pseudocode follows the guidelines, the comments will be complete and meaningful." Cool!
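To show what that last idea looks like in practice, here's a tiny made-up Python routine written pseudocode-first, with the pseudocode left behind as comments:

```python
# The Pseudocode Programming Process, in miniature: write the routine as
# pseudocode first, then fill in code under each line so the pseudocode
# survives as comments. The routine itself is invented for illustration.
def average_rating(ratings, minimum_count=3):
    # If there are too few ratings to be meaningful, give up
    if len(ratings) < minimum_count:
        return None
    # Discard ratings outside the valid 1-5 range
    valid = [r for r in ratings if 1 <= r <= 5]
    # If nothing valid remains, give up
    if not valid:
        return None
    # Otherwise return the arithmetic mean of the valid ratings
    return sum(valid) / len(valid)
```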

And many more. Bottom line: if you are serious about and committed to improving your craft, read the book! How about you? What coding books have helped you be a better programmer?

[1] There were two reasons my first two attempts to read this hefty 900-pager failed. 1) I wasn't fully committed, and 2) I didn't have a concrete plan to read it. I resolved the former when I realized I'd been focusing on my work projects to the exclusion of professional development [3] (other than reading a handful of blog posts a week) and needed to shake things up. The second block was simpler to address - break the book up into manageable chunks and commit to reading one every day. I summarized this generally in Reading Books The GTD Way, but in this case I did a back-of-the-napkin analysis, working backward from an estimated chapter velocity:

900 pages, 35 chapters

  • 1 chapter/hr -> 35 hours
  • 1 hour/workday, 3 workdays/week -> 3 hours/week
  • -> ~12 weeks (~2.8 months)

I've been recording my minutes/page progress, which ranges between 1 and 2 1/2. With an average chapter being about 30 pages, this works out to roughly 60 minutes/chapter. Good estimate! (Actually, I'm reading more than three chapters per week, and at my current rate I will be done at the end of April.)
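Spelled out (in throwaway Python, with the same rough numbers), the estimate checks out:

```python
# Back-of-the-napkin check of the reading estimate, using the rough numbers above.
pages, chapters = 900, 35
pages_per_chapter = pages / chapters           # ~26 by simple division; I observed ~30
minutes_per_page = (1 + 2.5) / 2               # midpoint of my recorded 1-2.5 minutes/page
minutes_per_chapter = 30 * minutes_per_page    # ~52 minutes, i.e. roughly 1 chapter/hour
weeks = chapters / 3                           # reading 3 chapters (~3 hours) per week
print(pages_per_chapter, minutes_per_chapter, weeks)   # ~25.7, 52.5, ~11.7
```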

[2] A helpful review was Matt Grover's blog: Code Complete Review, which pointed me to Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin (Amazon link here). That book looks excellent too, though it apparently has less detail, weighing in at a svelte 500 pages.

[3] I was surprised to find few articles on the Web on the importance of professional development for software engineers, at least via a quick Google search. The main one was 10 Professional-Development Tips for Programmers. It is structured annoyingly as a slide show, meaning you have to click through to see the content (an SEO and advertising gambit?), so I've collected the titles for you:

  • Staying Current Requires Continuous Learning
  • Problem-Solving Skills
  • Communication and People Skills
  • Networking and Personal Branding
  • Code Documentation and Neatness
  • Master Naming Functions
  • Get Familiar With Agile
  • Get Familiar With a Native Mobile Platform
  • Project Management Skills
  • JavaScript, CSS and HTML5 Skills

Searching the two programming-specific sites I like (Hacker News and reddit.com/r/programming - what are your favorites, BTW?) was more fruitful. Here are a few:

Sunday
Mar 22, 2015

A few ideas around citizen involvement and government/media oversight

While going through old files, I dug up a few single-paragraph proposals from 2004 that I wrote for a research lab I worked for at the time. They did not go forward, but I thought I'd share them with you. When I discovered them, I was surprised: I'd forgotten that my interest in these areas goes back that far. Another surprise is that the algorithms and technical approaches that could implement some of these ideas are only now emerging from computer science research labs - [1] and [2], for example.

Right now I'm exploring ways to restart this interest, and I hope that posting these archived ideas here might jiggle something loose. Cheers!

References:

1. Increased Corporate Influence on the Federal Government

Specifically, using technology to enable citizens to monitor for abuses and for actions that limit the government's effectiveness. The first example that comes to mind is the commercialization of legislation: wealthy corporations and individuals influencing laws to their benefit [1]. The result is laws that ignore the greater good (which usually lies in the longer term) in favor of shorter-term profits. What can our group do? Let's apply our ideas to the job of examining this cycle of influence. It seems that the data are available and amenable to relational analysis. Is the problem challenging? I'm not sure; it depends on how clear the patterns are.

References:

2. Increased Bias in Consolidated Media Companies

As citizens, I believe we are facing challenges to fundamental aspects of our society as a result of the recent creation of large media conglomerates that control a high percentage of mainstream media [1]. The problem here is that corporations are biasing the news in directions that favor profits over journalism. What can we do? One idea is to create a system that can indicate the bias of news stories. I imagine something like Google News that lists major news stories along with a bias indicator. The hope is that people reading their favorite writer on a topic will be able to get a summary of the story's bias. A final idea would be to look for stories that are *not* covered by mainstream media, i.e., an automated version of [2].

References:

3. Reduced Rigor in Major Media News Reports

Another problem, related to 2) above, is the current propensity of some writers to report facts taken from secondary sources without investigating the original sources [1]. To address this we might analyze stories written about a single news event in order to determine what sources they derived their information from. For example, do they all link to a single source, or are the sources more diversified? Also consider the recent avalanche of stories (many misinformed) about DARPA's Total (now 'Terrorism') Information Awareness program. Could a software system using our research have determined that a small number of influential stories were propagating around the net, with little additional analysis performed by those referring to them? Finally, another idea is to create automated versions of some of the 'fact check' web sites currently available ([2], [3]).
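As a toy sketch of that source-diversity idea (the stories and links below are entirely made up), the core measurement could start as simply as:

```python
# Toy sketch of the source-diversity idea: given the outbound links found in a
# set of stories about one news event, how concentrated are the underlying
# sources? The stories and links here are entirely made up.
from collections import Counter

story_links = {
    "story_a": ["wire-service.example", "agency.example"],
    "story_b": ["wire-service.example"],
    "story_c": ["wire-service.example", "blog.example"],
}

counts = Counter(link for links in story_links.values() for link in links)
top_source, top_count = counts.most_common(1)[0]
share = top_count / sum(counts.values())
print(f"{top_source} accounts for {share:.0%} of cited sources")  # -> 60%
```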

References: