Testing a “Donate” Button

March 8, 2012

Today I released a new “little app” to complement the growing suite of GraphEdge tools: RecenTweet filters your Twitter feed by limiting the amount of content you see from the most “chatty” of the accounts you follow.
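For the curious, the basic idea is easy to sketch. Here’s a minimal illustration in Python (the tweet structure, the cap, and the function are hypothetical, not RecenTweet’s actual logic): cap how many tweets any single account can contribute to the feed you see.

    from collections import defaultdict

    def filter_chatty(tweets, max_per_author=3):
        """Keep at most `max_per_author` tweets from any one account.

        `tweets` is assumed newest-first, each a dict with 'author' and
        'text' keys -- a stand-in for real Twitter timeline data.
        """
        seen = defaultdict(int)
        kept = []
        for tweet in tweets:
            seen[tweet["author"]] += 1
            if seen[tweet["author"]] <= max_per_author:
                kept.append(tweet)
        return kept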

On this light-weight app I’m testing out a monetization feature that has long fascinated me, but which I’ve never actually tried: the “Donate” button. I’ve been informally polling my Dogpatch colleagues for their thoughts/predictions on how it will perform. I plan to track this very carefully and will be blogging metrics on its performance. Please leave a comment to register your own prediction on how the button will perform in the context of this app, or to share your experience with “donate” strategies in Web properties.

Disruption

February 6, 2012

Just got around to watching the Apple keynote where they introduced Author and the new iBooks and iTunes U. Did anyone else detect some slight panic in the audience’s reception of these? There were oohs and ahs, as in any Apple keynote, but there were tons of publishing industry insiders there and if they weren’t panicked, they should have been. Education is an industry that needs disruption, in general, and not just in how they deliver textbooks or organize classes.

Within the past twelve months I’ve had discussions with a publishing company that, through a focused internal team, intends to introduce tools to improve college readiness. Not sure they’re taking the right approach… doing anything even slightly disruptive is tremendously difficult when attempting to do so from within one of the giant, incumbent organizations. But someone is going to crack that nut, and it might well be them. Whoever figures it out, I hope it happens soon. I look forward to that disruption with an interest more social (and parental!) than entrepreneurial. The sooner the better.

Validation & Accuracy

March 2, 2011

We had a problem recently. Our Campaign Tracking application experienced a bug that resulted in our failing to collect a handful of tweets that rightfully should have been found. Once the problem was identified, we were able to restore almost all the missing tweets. We estimate having “dropped” between 10 and 20 tweets from the campaign, which isn’t too many, but still isn’t perfect.

I was very upfront about the issue with the client whose campaign experienced the problem. They expressed their concern and asked me what I could do to give them more confidence in the numbers GraphEdge reports (because they report those numbers back to their clients).

As a result of this, I decided to implement a Quality Assurance program: we would pay for a competing application to periodically validate that our campaigns were capturing the complete set of tweets at the most basic level of complexity—a single-keyword campaign. My idea was that if we can be sure we’re capturing the full scope of those, then we can internally validate all the fancy features inside of the GraphEdge application (multiple keywords, user-panel restriction, negative keywords, etc).
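As a rough sketch of what that validation looks like (the inputs here are placeholders, not real GraphEdge or competitor APIs): pull the tweet IDs each system captured for the same single-keyword campaign and compare the sets.

    def validate_capture(ours, theirs):
        """Compare two sets of captured tweet IDs for one keyword campaign.

        `ours` and `theirs` are sets of tweet IDs from GraphEdge and from
        the reference (competing) application -- placeholder inputs.
        """
        missed_by_us = theirs - ours      # tweets only the reference found
        missed_by_them = ours - theirs    # tweets only we found
        coverage = len(ours & theirs) / float(len(theirs)) if theirs else 1.0
        return missed_by_us, missed_by_them, coverage

Any non-empty "missed by us" set is exactly the kind of gap the Campaign Tracking bug produced.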

It was a great plan, but it relied on one assumption—that a perfect competing application is out there. So far I’ve tested 3 competitors (to remain unnamed!) but rather than validate our own application, all I’ve been able to do is invalidate theirs. Each of the three has failed to find all the tweets GraphEdge has, and in one case the competitor routinely captured as few as 20% of the daily tweets GraphEdge did.

So it’s nice to not have invalidated our own application, but the problem still exists: how to validate it to satisfy customers? Given the apparent lack of accurate data, is there an “accuracy” sales/marketing angle I can take to close new business? How do other companies satisfy their clients that their data is accurate? Do they care!?

Went looking for answers, but just came up with more questions.

Conversation Discovery now in GraphEdge Pro

October 14, 2010

Now you can get a sense of how much buzz your campaign generates beyond just the results of keyword searches!

GraphEdge Pro now discovers tweets (entire threads) that are in reply to campaign-relevant tweets, regardless of whether the reply contains a campaign search-term.

These “conversation-tweets” and their authors are integrated into the standard Campaign reporting, and are also reported separately, with all the stats you’d expect to see from a GraphEdge report!

There’s a new “Conversations” tab you can use to view threads, hierarchically rendered. When you manually “exclude” a tweet as being irrelevant to a campaign, the entire part of the thread that is in response to that tweet is also excluded (or re-included, if you’re restoring a tweet).
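Under the hood this behaves like a simple walk of the reply tree. Here’s a minimal sketch of that propagation (the data structures are illustrative, not GraphEdge’s actual implementation): excluding a tweet also excludes every tweet that descends from it, and re-including works the same way.

    def set_excluded(tweet_id, replies, excluded, exclude=True):
        """Mark a tweet and its entire reply subtree as excluded (or restored).

        `replies` maps a tweet ID to the IDs of its direct replies;
        `excluded` is the set of currently excluded tweet IDs.
        """
        stack = [tweet_id]
        while stack:
            current = stack.pop()
            if exclude:
                excluded.add(current)
            else:
                excluded.discard(current)
            stack.extend(replies.get(current, []))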

We’re really excited about this one… multiple clients are already making use of this new feature. It’s very cool!

Please reach out if you’re interested in adding Campaign Tracking to your existing GraphEdge report, or if you’re interested in learning more about becoming a GraphEdge client. I’m offering free trials of Campaign Tracking for a limited time, so hurry!

Brief Outage

September 7, 2010

We suffered a brief outage this morning/afternoon for GraphEdge Pro (http://agency.graphedge.com). No data was lost, and we’re back up and running. Please contact support at graphedge dot you-know-where with questions.

Followers’ Followers vs. The REAL 2nd Level Reach

June 15, 2010

A client and I are in the middle of a discussion about total second-level reach. The question is: how many total followers do my followers have? The answer is far less straightforward than most analysts/agencies would have you believe.

A naive approach to this question would be to look at each follower and count the number of followers those users have. The problem with that approach is that we double- (or triple-, etc.) count any individual who is jointly followed by two or more of the followers. Many analysts will ignore this as insignificant and just report the big number. Much easier. And completely wrong.

GraphEdge is less concerned with who is following a set of followers, and more concerned with who else the followers follow. That is, we look at the other people your followers follow, rather than who is following your followers (I know, it gets complicated!). However, from what we do calculate, I can tell you that it is not sufficient to take a total of followers’ followers and report that as second-level reach.

The reason is that it ignores “centrality” (for you academics). Imagine that ten of my friends and I are all following each other. My ten followers will each be following ten people. Add that up, and that’s 100 “connections”. But we know there are really only 10 unique individuals in that extended network (not counting me). That’s a 10% uniques-to-connections ratio.

Centrality tends to increase as the size of the network increases, so growing your network has a diminishing effect on the size of the extended network.
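In code, the difference between total connections and unique reach is essentially a list versus a set. A minimal sketch (the `get_friends` function is a placeholder for whatever fetches the accounts a user follows, e.g. via the Twitter API):

    def second_level_reach(account_id, followers, get_friends):
        """Compare total connections with unique friends-of-followers.

        `followers` is a list of follower IDs; `get_friends(user_id)` is a
        placeholder returning the IDs that user follows. The account itself
        is dropped from the unique count, matching the "not counting me"
        convention above.
        """
        total_connections = 0
        unique_friends = set()
        for follower in followers:
            friends = get_friends(follower)
            total_connections += len(friends)
            unique_friends.update(friends)
        unique_friends.discard(account_id)
        ratio = len(unique_friends) / float(total_connections) if total_connections else 0.0
        return total_connections, len(unique_friends), ratio

Run on the ten-friend example above, that returns 100 connections, 10 unique names, and a 0.10 ratio.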

To demonstrate this effect I took a random sample of 700 or so active GraphEdge accounts and looked at the number of followers, the number of connections to followers’ friends, and the number of unique names the followers were following. Note: for accounts with more than 10,000 followers, we generally take a sample of the followers, rather than looking at each follower’s network, so to keep the numbers clean I limited my sample of 700 accounts to those with fewer than 10,000 followers.

This chart shows the total connections (blue dots), given a number of followers (on the X axis), and the number of unique individuals (red dots) in the set of followers’ friends. Notice how the number of unique individuals grows at a much slower rate than total connections, as the number of followers increases.

For example, let’s look at that first pairing at around 9K followers. The blue dot (the highest in the chart) says that those 9,000 or so followers had a total number of connections around 8M (close to 900 “friends” per follower). However, from the red dot directly below it, we can see that there were only 2M unique people in that list of 8M connections (more like 250 unique names per follower).

And this next chart breaks it down by the ratio of uniques-to-connections, given the number of followers.

Our friend with the 9,000 followers had a unique-names count of just 25% or so of the total number of connections.

So if someone asks for a summed-total of Twitter followers’ followers: go ahead and present the number, but know (better yet, explain) that the real reach may well be just 30% of the number reported… or less! And if you need to calculate either one of them, let me know, I can help you there!

Too Many Notes

May 23, 2010

Was bound to happen… by drawing every available data point into our graphs we’ve exceeded the maximum length of a URL, so charts for certain very-long-term accounts aren’t drawing. The fix is simple in concept but less straightforward than one might think. It’ll give us something to work on tomorrow!
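One simple-in-concept approach (not necessarily the fix we’ll ship) is to thin the series before encoding it into the chart URL, so long-lived accounts still draw:

    def downsample(points, max_points=200):
        """Thin a long data series to at most `max_points` values.

        Keeps evenly spaced samples and always includes the last point,
        so the encoded chart URL stays under the length limit. The cap
        of 200 is arbitrary, for illustration.
        """
        if len(points) <= max_points:
            return points
        step = len(points) / float(max_points)
        sampled = [points[int(i * step)] for i in range(max_points)]
        sampled[-1] = points[-1]  # keep the most recent value
        return sampled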

Beta Testing Agency Tools

April 2, 2010

Beta testing some new tools for the Agency version of GraphEdge. If you’re planning on running a campaign on Twitter in the next week or so, drop me a note, I’d love to have GraphEdge track it for you, free of charge.

Recent bug and fix

April 2, 2010

I’ve been so focused on our agency-related products that I missed a pretty big bug in our public-facing site.

The issue was that our updates weren’t being sent if people didn’t explicitly set their update frequency in their account settings. I fixed the bug a couple of days ago, but the fix had an unfortunate side-effect for a certain group of recent Free Trial users: if they signed up more than 14 days before the bug was fixed, their free trial expired before they received their first update. So they’d click the links in their @messages, and the report page would tell them they’re expired and have to pay. That sucks!
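A fix for that kind of bug amounts to treating a missing setting as a sensible default rather than as “never”. A minimal sketch (the field name and default value are illustrative, not the actual GraphEdge code):

    DEFAULT_FREQUENCY_DAYS = 7  # illustrative default, not the real setting

    def update_frequency_for(account):
        """Fall back to a default when the user never set an update frequency.

        Before the fix, a missing value effectively meant no updates were sent.
        """
        return account.get("update_frequency_days") or DEFAULT_FREQUENCY_DAYS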

This has led to confusion and, in some cases, mistrust. I can understand that. If you’ve encountered this bug, I’m very sorry for the inconvenience. I’m trying to find a solution that’ll let these ‘limbo’ Free Trials continue. Stay tuned.

Good but Not Good Enough

March 14, 2010

I’ve had a survey.io survey available to some repeat customers for a short time. Results are good (even though the sample size is small), but not where I want the product to be.

The key question in a survey.io survey is: “How would you feel if you could no longer use [product name]?”

Current results for GraphEdge are:

  • Very disappointed: 15.4%
  • Somewhat disappointed: 76.9%
  • Not disappointed (it really isn’t that useful): 0%
  • N/A – I no longer use GraphEdge: 7.7%

For Lean Startup/Customer Development gurus, this one question is the main metric for product-market fit in young products, much preferred over its ugly cousin, the misleading “Net Promoter Score”.

Marketing guru Sean Ellis visited Dogpatch Cambridge back in December and told us he won’t even start marketing a product until it has 25%-40% “very disappointed”. So while these scores look pretty good, the product isn’t really where I want it yet.

Of course, this is a poll of the public use of GraphEdge, not of my PR- and marketing agency partners, who are overwhelmingly positive. So I’m not terribly disappointed to have a “would be very disappointed” score of under 30% from the general public… but I’d rather it be better!

Regarding Net Promoter Score: question 5 asks: “Have you ever recommended [product name] to others?” In this question, GraphEdge scores a whopping 77%. Viewed on its own, this would perhaps give a too-rosy picture of the product’s fit to market. Combined with my “how disappointed” score, however, this tells me that I’m doing better than the competition, but failing to delight. Going to work on that.

