Well that just about does it for the day. Time to go home and recharge the batteries for tomorrow’s sessions. Have a great night!
Ironically, it’s usually consumption that outpaces production … but that seems to have reversed in the Internet age at least.
If the web is growing faster than the leading search indexer (Google) can index it, what does that mean for the future of data? The real-time web will only make the content on the web grow faster. Can consumption keep up?
How contextual should push notifications be? The guy asking the question makes a good point. I don’t care about every new follower I get on Twitter, but there are some followers I do care about – like when a celebrity or someone with a larger network influence starts paying attention. I don’t need push notifications every time a dollar or two comes in or out of my bank account, but if there’s a huge, atypical withdrawal (or deposit?) I want to know pretty quickly. Context is what frames the value of the immediate notification.
If you want people to find your app using Google, then (right now, at least) you almost need to build two versions of your app – one for the user, one in HTML for the crawlers. I thought that was the point of the hash-bang API Google and Twitter have been experimenting with. It at least warrants some further research.
If I need to go to your app to get value out of it, there’s a problem. I’m busy enough; the content and value should come to me. The real-time web isn’t so much about making the web faster, it’s about making it more convenient. Updates and info come to me as I need them, they don’t sit on some static site and wait for me to come looking for the update.
Panel discussions work best with 3 experts. Maybe 2, but no more than 4. The moderator just summed up the 8-person panel perfectly: “We’ve had 8 people talk about themselves and the future now for 20 minutes …” Makes it hard to get into real content and discussions with a 40-minute time limit.
“Some apps are going to be real-time and it’s going to freak people out.” I definitely agree with that.
“Communicating with super-low latencies will give us some really cool apps.” I agree, but a lot of that still depends on technology. A lot of people still don’t have high-speed Internet, smart phones, or computers capable of doing even half of what we’re doing …
“We go to a conference to meet people.” I like the idea … but really, this conference has had such a tight schedule I haven’t had the time to meet many people. And the few I’ve had the chance to meet were kind of floating around in a clique … I got first names, quick handshakes, then they started talking about events from the previous weeks and started catching up on “in” things from their group. Not so much “meet people” there … so sorry, I disagree.
BankSimple will eventually be exposing a developer API? Awesome!
The real time web, in the banking world at least, has been around for a long time on the side of the banks. But the future of the real time web is putting it into the hands of the consumer. Thanks for the innovations, Alex Payne!
Eight panelists … I definitely do want to know what they all do … but with only 40 minutes for the panel I think these introductions will take way too long. Be brief, people, please.
And now, the panel begins.
Today’s final panel will feature Mikeal Rogers, Alex Payne, Leah Culver, Julien Genestoux, Nathan Fritz, Jack Moffitt, Jeff Lindsay, and Chris Blizzard. Should be informative, useful, and educational!
10-15 minute break, then the panel I’ve been waiting for all day! Time to stretch …
… and if anyone builds something based on that idea, I want a copy. And a short “inspired by” attribution buried somewhere in a readme … nothing big, just something I can show off while waiting for the race to start …
I definitely think a geolocation feature for a marathon would be a good seller. “Track me as I run this race.” I’d love to post an interactive map of a race, let people see where I am at any point in time, link together with a few web cams so they can watch me as I pass through check points. That’d be very cool.
A geolocation game played with cars … hmm …
So what’s the difference between a geolocation game and an augmented reality game? I think it’s all in the UI, and that’s why it’s effective. With a geo game, you use the real world as the UI and the phone/device is just an added input. With AR, you use the phone/device as the UI and the real world is an added input. It’s all about affecting common behavior. If AR really wants to take off, I think it needs to start in geolocation and evolve with the technology.
A physical drive with 6 GB/s read and 4.4 GB/s write speeds? Holy crap. Even at 0k for a drive, that’s really incredible. I don’t think I’ve even touched something that needs that kind of speed.
I like the slide title: “Examples of doing it wrong.” Ironically, just about everything on this list came up last week when I had a performance problem with my app. I’m glad I was able to solve it without falling into any of these traps.
The most relevant data should stay outside of the slowest point of the application. If MySQL (or the persistent store) is the slowest point, then cache data in memory rather than reading/writing from the database frequently.
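A rough sketch of what that looks like – the `slowDbRead` call here is just my stand-in for MySQL or whatever the slow persistent store is:

```javascript
// Read-through cache: hot data stays in RAM, the slow store is only
// touched on a cache miss. "slowDbRead" is a pretend database call.
const cache = new Map();

function slowDbRead(key) {
  // pretend this costs a round trip to MySQL
  return `value-for-${key}`;
}

function cachedRead(key) {
  if (cache.has(key)) return cache.get(key); // hot path: RAM only
  const value = slowDbRead(key);             // cold path: hit the store once
  cache.set(key, value);
  return value;
}
```

Every read after the first comes straight out of memory, which is exactly the point: the database stops being the bottleneck for your most relevant data.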
Wow … that was just the overview? Intense …
The “Reactor pattern” is a way to resolve the blocking IO (scalability) issue, and most programming languages have one.
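As I understand it, the pattern boils down to one loop dispatching ready events to registered handlers instead of blocking on each IO source in turn. A toy version (my own sketch, not any particular library):

```javascript
// Toy reactor: handlers are registered per event, and a single
// dispatcher hands ready events to them. Nothing ever blocks waiting
// on a single IO source.
class Reactor {
  constructor() { this.handlers = new Map(); }
  on(event, fn) { this.handlers.set(event, fn); }
  dispatch(event, data) {
    const fn = this.handlers.get(event);
    if (fn) fn(data);
  }
}

const reactor = new Reactor();
reactor.on('socket-readable', (chunk) => console.log('got', chunk));
reactor.dispatch('socket-readable', 'hello'); // handler runs; nothing blocked
```

Node’s event loop is essentially this idea taken all the way down into the runtime.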
“Ruby doesn’t scale well” is a myth. The scalability issue is actually common to many programming languages, not just Ruby.
The trick to async with MySQL is message queues … put that on my to-do list for research, since I’m not 100% sure how to accomplish that.
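My rough guess at the shape of it, before I actually do that research – producers enqueue writes and return immediately, and a worker drains the queue into the database in batches (`flushToDb` is a pretend stand-in for the real MySQL write):

```javascript
// Queue-based async writes: callers never wait on the database.
const queue = [];
const written = []; // stands in for rows landed in MySQL

function enqueueWrite(row) { queue.push(row); } // returns immediately

function flushToDb(batch) { written.push(...batch); } // pretend: one DB round trip

// A worker (timer, separate process, whatever) drains in batches.
function drain(batchSize = 100) {
  while (queue.length) flushToDb(queue.splice(0, batchSize));
}
```

The real version would presumably use Redis or a proper broker as the queue, but the decoupling is the trick.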
The basic stack includes Redis, Node.JS, Socket.IO, and the web application itself. He’s going to give us more details … and now I understand Redis a bit more. I think a Pub/Sub feature for a key/value store is really powerful, and I’ll probably use it to power the subscription engine of SwiftStream as I continue to build it out …
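For my own notes, the shape of Pub/Sub – this is a toy in-process sketch I wrote, but real Redis does the same thing across processes with its PUBLISH/SUBSCRIBE commands:

```javascript
// Minimal pub/sub: subscribers register a handler on a channel,
// publishers fan a message out to everyone listening.
const channels = new Map();

function subscribe(channel, handler) {
  if (!channels.has(channel)) channels.set(channel, []);
  channels.get(channel).push(handler);
}

function publish(channel, message) {
  (channels.get(channel) || []).forEach((handler) => handler(message));
}
```

The publisher never knows who (or how many) subscribers exist, which is why it works so well as a subscription engine.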
They originally built the system using Apache and discovered the hard way that it wasn’t asynchronous when 7,000 score updates were queued on the server and ate a lot of RAM. I’m so glad people warned me about that before I started doing async work … yay for Nginx!
The video of the graphics and pings on the map was pretty cool.
GeoLoqi sounds like a fun team to work with. Having your job be making a game as an experiment just to see if it works? Awesome!
Kyle Drake and Building MapAttack. I know web-based games are pretty cool, but a web-based game using real-world geolocation … it’s very similar to the augmented reality stuff my friends were trying to get me into last year. Should be exciting!
Ironically, I just saw a presentation on Thoonk and when I tried to load their website, it crashed because the servers are over capacity. Maybe it wouldn’t be the best choice for a scalable web app? Just kidding, I’ll still give it a look …
Some of the resources Henrik exposed us to: &! (his app), Thoonk.js, Backbone.js, and Capsule.js. I still have no idea what Thoonk and Capsule do, but I’ll be looking into them. And into Redis. They could all be hugely useful.
Those last pieces of advice were courtesy of Nathan Fritz, who will be speaking tomorrow.
“If your app fits Redis, use it. If it only kind of fits, don’t touch it.” – Great advice and reminds you to always choose the right tool for the job.
Redis can keep the entire app in RAM for 10s of thousands of users. That alone is impressive.
Why Redis? Basically because it’s “fast as hell” … or because it’s a “honey badger” … um … sure …
Wow … this session is very much like drinking from a firehose. Capsule.JS is the latest framework we’ve been told about, and I still barely understand what any of these systems do. Kind of like a here-are-50-tools-pick-one presentation. I’m not knocking it at all, it’s just a lot of information to absorb and retain. I’m sure I’m forgetting far more than I’m retaining right now.
Redis, an open source key/value store, is a great way to share memory between processes and languages. It’s pretty scalable, too.
Sharing data models between the client and the server is great for quick prototyping, but sharing that memory state isn’t very secure or scalable.
Case Study #3: andbang (&!)
, which just launched today!
Case study 2: Recon Dynamics
Parsing XMPP in the browser is a pain. In my experience, parsing just about anything in the browser is a pain, so you want to be sure the data is ready before you send it. Particularly if you’re concerned about cross-browser performance (I’m looking at you, IE) …
Ick. It’s built in Django. I’m not really a fan of using Python and Python-related frameworks for web apps … don’t ask me for a solid argument why, just a lot of bad experiences …
First case study: Frontdesk.im
Real-time, real-life Pac-man after the conference? Cool!
The next session, presented by Henrik Joreteg, is about building 3 single-page apps 6 different ways. When I first read that description, it sounded very much like reinventing the wheel … but I’m intrigued nonetheless.
A lot of systems are turning things off by default now … is that the right plan of action? I don’t necessarily think so. A lot of users don’t understand how to turn things back on, so disabling features by default in favor of security is crippling users in my opinion.
So where are websockets today in terms of security? Adam feels comfortable with them, but doesn’t consider himself a websocket expert.
Whose responsibility is it to write secure code? Is it a requirement of the framework/language? Or of the developer using the tools?
“If you happened to fall asleep …” Sorry, guilty. But that was a really technical discussion of the issues in the community with a very limited introduction and not much room to breathe.
Adam’s goals for the community:
- Secure by default
- Better examples – documentation that doesn’t suck
The challenge is that we have a lot of developers who don’t really understand a lot of the security concerns that come along with development on the client. They’re typically server-side developers who are now writing libraries for use on the client side … but they haven’t been coming from the client perspective.
“I might make a few people upset by this talk” … Now I really want to know why.
“Old Problems, New Tools” – Adam Baldwin
Next up, a presentation from Adam Baldwin, the co-founder of nGenuity.
Is there binary support for websockets yet? It used to be stream based, now it’s packet based … but the best answer is “I think so?”
Debugging long-lived applications in the browser? Mozilla is building out a pretty advanced memory management tool. They’re breaking out DOM, content, layout, style, etc.
Applications don’t need to live in the cloud, they can live in the browser and interact with other browsers, the cloud, or other applications.
It also allows for direct peer-to-peer data transfer. That’s a lot better than routing data through a 3rd party server. A peer-to-peer connection would be faster and, from a privacy standpoint, a bit more secure than a peer-to-server-to-peer connection.
WebRTC is a direct audio/video connection between browsers. It’s not run through a 3rd party server to reduce latency. I think it’s a fantastic idea, and what I referred to at one point as “Skype in the browser.” Apparently Mozilla and Google are collaborating on it. Awesome!
Exposing device APIs and providing access to lower-level functionality of the machine to the browser and the browser’s applications are important.
Applications and web pages are different things. You install an application … and the mental associations that come along with that instill a sense of ownership. When you use a web page, you merely visit the web page. It’s easier to offer subscriptions and have a pricing model for an installable application than a website, even if they run on the same platform and present the same experience.
You can build web-based applications that run in the browser but which aren’t server-based applications. This is a hugely powerful concept.
Google as an alien face-sucking monster … interesting analogy …
“Data in the cloud is the new proprietary source code.” Data is being locked in because it’s stored on a proprietary system.
Websockets are nice because there’s not much overhead added to the request.
HTTP has been evolving from long, static requests, to asynchronous requests for chunks of HTML via XHR (XMLHttpRequest), to AJAX long polling, to websockets.
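The long-polling step in that evolution is worth a sketch – as I understand it, the server parks the response until there’s actually something to say, instead of answering “nothing yet” over and over. A toy version (the parked callbacks stand in for held-open HTTP responses):

```javascript
// Long polling in miniature: clients "park" a callback; the next
// update answers all of them at once. In the real thing each waiter
// would be an open HTTP response object.
const waiters = [];

function longPoll(respond) {
  waiters.push(respond); // hold the response open until data arrives
}

function pushUpdate(update) {
  while (waiters.length) waiters.shift()(update); // answer everyone
}
```

Websockets then remove even the reconnect-after-every-answer overhead, which is what the next bullet is about.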
There are technologies that we’re going to be building into browsers that will change the way the world builds web applications.
Considering the cool stuff Mozilla has been doing with Firefox lately, this should be a pretty powerful presentation.
Next up, Christopher Blizzard from Mozilla … talking about “Real Time in the Browser”
No idea what the next presentations are about … but I’ll be sticking in Track A for the next few speakers as well. Their resumes are compelling enough to promise something interesting, so I thought I’d gamble and stick it out here.
And now I hear about JSON-RPC … I need to see how that’s different from dNode, since that itself seemed very much like a JSON-powered RPC system.
“Futon” is the event service bus for hook.io and CouchDB. I think this particular community needs some help naming their projects …
“That totally should’ve worked.” No joy here, though. Sad. 1 more demo to hopefully finish things up.
Am I the only one seeing the freaking awesome implications of a system like this? Twitter, IRC, a browser chat … all talking to one another and broadcasting messages. One point of entry, one API, over 40 different application hooks to broadcast and transport messages. Incredible!
We’ve now established bi-directional communication between the browser and IRC … next we’re adding Twitter.
OK, the browser just piped a message through hook.io into the IRC chat room … awesome stuff!
Now we’re listening to IRC messages, too.
All of hook.io is in active development, and hookJS was just introduced a couple of months ago.
Crap … I tweeted and get to be the “first victim” in the presentation.
Awesome captcha … that looks like nothing you can possibly type.
Setting up hooks to listen to Twitter and IRC at the same time. That’s freaking cool.
Next demo – setting up a quick RSS feed server …
“Sorry, bear with me for just one moment … ” Methinks we failed with the third goal of the presentation …
OK, I definitely will need to build something with hook.io. This is nifty stuff.
(My host might reboot my server in a few minutes … so if I disappear, I’ll be right back …)
Goal for the live coding demos – build an application, process multiple data streams, don’t fail.
IPO – Input, Process, Output. Building on this model, you can have a lot of actors that make up an application that is greater than the sum of all its parts.
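A quick sketch of that idea (my own toy, nothing from the talk) – small actors, each just input → process → output, composed into something bigger:

```javascript
// Each actor is just a function of its input; composing them yields
// a pipeline that's more capable than any single piece.
const compose = (...actors) => (input) =>
  actors.reduce((value, actor) => actor(value), input);

const parse = (raw) => JSON.parse(raw);     // Input: raw message off the wire
const pickScore = (msg) => msg.score;       // Process: extract what we need
const format = (score) => `score: ${score}`; // Output: something displayable

const pipeline = compose(parse, pickScore, format);
```

Swap any actor for a smarter one and the rest of the pipeline never notices – that’s the “greater than the sum of its parts” bit.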
Sadly, I’m surrounded by Mac users. Seriously, I can see 16 different laptops from my seat and every single one is a Mac!!!
Apparently I’m not geeky enough … he instructed us to curl an address and I immediately went to my browser because I don’t know a better way to do it on Windows …
Coming up next, Marak Squires and hook.io in Track A …
I have about 50 different ideas in my head right now … and not enough time to do more than 2 or 3 of them. Maybe I should just spec out some rough details and sell the concepts to the highest bidder …
dNode is the same API for client-server and server-server. And the protocol is entirely abstracted away so you can use the protocol without using the dNode library at all. That’s the biggest difference between dNode and nowJS.
“How do I handle getting and setting closure variables? I don’t …”
Bouncy queries dNode using Socket.IO to route requests from one server to another and act as an on-demand load balancer … that was actually a pretty cool demo. Simple, easy to use, but I think it’s insanely powerful.
It definitely feels like “shared memory through communication” from that earlier presentation. You don’t have to duplicate functionality in different locations so long as you communicate enough to expose and support that functionality in those different locations.
Where dNode shines is in its functionality and its ability to expose functionality you’ve already written somewhere else.
“Most of you are from the Bay area.” I feel special … I’m not
Calling a remote process lists a bunch of data and a method name … I think an improvement would be to also list out parameters for the method. Kind of like a WSDL for a SOAP call. Knowing that the bart system exposes a departures() method doesn’t help me if I don’t know what parameters the method requires/accepts.
There are PHP, Java, Ruby, and Node.JS adapters for dNode. I wonder how hard it would be to write a .Net adapter. JS-based RPC would be huge for my .Net MVC projects.
So long as you use callbacks, it’s pretty much all going to work.
dNode’s protocol is basically newline-delimited JSON.
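Which, if I have it right, makes framing trivial: split the incoming stream on newlines and JSON-parse each complete line. A quick sketch of a receiver (my own toy, not dNode’s actual code):

```javascript
// Newline-delimited JSON framing: every complete line is one message;
// a trailing partial line is kept as leftover buffer for the next chunk.
function parseFrames(buffer) {
  const lines = buffer.split('\n');
  const rest = lines.pop(); // possibly-incomplete trailing frame
  const messages = lines.filter(Boolean).map((line) => JSON.parse(line));
  return { messages, rest };
}
```

The nice side effect is exactly what the talk claimed: any language that can split lines and parse JSON can speak the protocol without the dNode library.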
Watching someone live code during a presentation makes me feel like a crappy developer …
Ooh … live coding demo. Using vi. And Node.JS. Awesome.
You don’t have to build a routing table or marshal around this stream just to use the callback.
The cartoon crocodile can zig() …
dNode makes it easy to synchronize a flow in realtime.
One of my coworkers wants to learn to be a “hacker” … I don’t think he knows what that means, but it’s entertaining. I should have brought him with me!
I’ll be sitting in Track A after this – first up in 10 minutes is James Halliday discussing dNode.
There’s a lot of test coverage with Derby’s examples and a “pretty cool test suite around what Racer does.”
The Derby framework will be used to build a lot of apps, will have changes to support ACL and authentication, and will provide long-term support.
Derby isn’t trying to answer the “how do you scale to a million users” question. Their first focus is on “how can anyone build a realtime app quickly and easily.” It’s all about getting the API defined first, then they’ll focus on scalability.
Derby can drop in realtime interaction to any web app. Check out a pre-beta demo at http://derbyjs.com
Demoing a realtime chat app in a room full of geeks with laptops … that was interesting.
The goal of Derby is to provide a way for every developer to build applications that are fully realtime and fully multi-user.
But application schema and data schema aren’t quite the same thing. That’s actually a great innovation and one I’ve already used in a couple of projects.
Data will automatically sync to your database, not just between your client models and your server models.
In the error handler, you can re-try the same action that just failed. I can see how this will help circumvent race conditions by just reapplying changes … but it seems a bit inefficient to just retry upon failure.
Race conditions and conflict resolution. Once you have a lot of concurrent connections, you can run into a lot of them. Makes synchronicity difficult.
I like how the underscore-signifies-privacy convention has now found its way into HTML rendering …
Ooh … “slightly more complicated example”
No, I get it. The client side MVC framework is in JS … so writing the code once for the client means you can re-use the code for a server-side Node.JS setup. Not bad, but it would mean migrating a lot of my existing .Net work if I want to take advantage of the write one, use anywhere paradigm.
“Write your routes once and they work on both the client and the server.” … so write them in which MVC framework? The PHP frameworks and .Net frameworks are different enough that I’m concerned here …
I like the way the Derby markup looks … but now I’m wondering which IDEs support it. I must say, Visual Studio Intellisense has me a bit spoiled when I start looking at web markup and IDEs.
If any of your users don’t have Internet access but have your page loaded, they can still interact with the application because it shares a lot of the server code with the client already.
A to-do application demo in Derby.
Views, models, and routing. Nothing new. Except it’s entirely asynchronous and can connect an input field on one machine in one browser to a display field in another for another user entirely.
Derby is built around realtime and has a component called Racer that works as a realtime data synchronization engine with Node.js.
He’s describing the disconnect between MVC on the server and “really complicated” jQuery on the client side. Sadly, he’s describing the exact problem I was fighting through all of last week …
The after-lunch talk is starting. Introducing Derby. It’s a new MVC framework that makes building realtime apps “easy.” I’m very excited about this one!
I think I need to migrate over to the Track A room for the next several sessions … anyone near an open power plug?
And now it’s time for lunch … yay for food! Feel free to track me down at some point and say hi!
I tried using Growl on my iPod when it first came out. Haven’t touched it again since. I really thought it had disappeared entirely until Adam mentioned it just now in his chat.
And now that I’ve compared realtime communication and push notifications to Flash … please don’t shoot me.
Use push to enhance your application. Give your users options and don’t let the technology get in the way of the experience. Reminds me of the advice we’ve been giving developers regarding Flash for years.
Urban Airship has their own push transport … called Helium. This warrants some looking into …
By the way, I’m working with my VPS host at the moment to correct the 8-minute timestamp issue on these posts. Should be resolved within the next hour or so. In other news, AtumIT
Reminds me of the disconnect between a sandbox PayPal account and a production one … you don’t know things will fail until they do.
“99% of the problems using push come from a disconnect between development and production.”
From the looks of things, Apple is a great way to learn sockets and push communication. Too bad so many of the code examples are written in Objective-C.
You can request all three of those permissions or just a smaller subset of them. Honestly, even all three isn’t that much … unless Apple expects to extend more permissions to push-enabled applications, I don’t understand why they’d make it that granular in the first place.
On iOS, applications can’t run in the background, so there are just a handful of things you can do: display an alert, add a badge (like an unread count in mail), and play a sound.
First up: the history and background of push, using iOS and Apple as the example …
Now it’s time for Adam Lowry and “Connecting the Disconnected” …
An attendee was just asked for his opinion on an issue … “I blogged about it, so you can go read about it.” Um … we don’t know who you are, buddy.
If storing state in a distributed system is a mess … why do we bother making a stateful system in the first place?
For those of you paying attention … the time tags on my updates are actually 8 minutes behind. Not because I’m a slow typer, but because my VPS’ internal clock is off.
Even though ZeroMQ can abstract a lot for you, it’s still too low-level most of the time.
ZeroMQ is a socket abstraction layer for messages rather than bytes.
“Don’t communicate by sharing memory; share memory by communicating.”
The trick is being able to answer “what’s happening now?” not “what happened 5 minutes ago?”
It seems to me that there’s a disconnect at the data level. On the one hand, we need to record data quickly to capture all of the real time events that occur in a system. But the discrete events aren’t what’s interesting … it’s the aggregation of that data that’s interesting. So at the DB level we’re running into a speed issue – speed of recording events and speed of processing complex queries over those events.
In a realtime system, you have very simple bits of state, but the simple systems are more about answering questions regarding multiple users. What are the trends across the system?
The tricky part with building a distributed system is controlling the multiple points of failure.
And now a quick test post to check that polling refreshes are working …
Testing the AJAX polling system to make sure it’s working properly …
10:49am – Installing a quick AJAX polling system so you don’t have to refresh any more …
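For the curious, the rough shape of what I just installed (simplified – `fetchUpdates` and `render` are stand-ins for the real endpoint call and DOM code):

```javascript
// One poll cycle: ask the server for updates, apply whatever came back.
function pollOnce(fetchUpdates, render) {
  const updates = fetchUpdates();      // ask the server: anything new?
  if (updates.length) render(updates); // only touch the page when there is
  return updates.length;
}

// The page just repeats pollOnce every few seconds.
function startPolling(fetchUpdates, render, intervalMs = 5000) {
  const timer = setInterval(() => pollOnce(fetchUpdates, render), intervalMs);
  return () => clearInterval(timer); // call this to stop polling
}
```

Yes, polling is exactly the “are we there yet” pattern Julien teased this morning – but it beats making everyone hit Reload.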
10:42am - 10 minute break until the next session ... Time to breathe for a minute ...
10:39am - Having multiple sharing/interaction features on a site is ineffective. The user is one person, so breaking apart Facebook and Twitter and Tumblr and ... sucks. There are ways to communicate between them - Khris is recommending we all take a look at Backplane.
10:37am - Realtime is about different aspects of the web. Creating data, delivering data, storing data. But another great focus is processing the data to extract meaningful, marketable information.
10:29am - Realtime is somehow associated with a list of updates presented in reverse chronological order. But that's not all that it is. We need to break out of that stereotype. I agree 100%; there's far more data that can be presented in realtime than just status updates. It's just a matter of providing value so that you're not just collecting data for data's sake.
10:28am - "Once you lock in to one of these units, you can just rip through the rest of the industry."
10:26am - Realtime on NBC's website makes keeping your phone nearby while watching TV essential to the experience. If you're not watching the show and engaged with the site ... you're missing something. Fantastic use of technology!
10:24am - A concept called "push to air" allowed publishers to quickly push content and comments from a live Twitter/Facebook/Forum feed directly to an on-air TV show. This definitely makes it compelling for people to go to the site and interact with the feed. "Hey, I might get on TV!"
10:20am - Print publishers and magazines can do interesting things, but they're all going out of business and disappearing ...
10:16am - Hearing about a simple product "anyone in this room could build in an afternoon," realizing that I could build it in an afternoon, and hearing how much money it was sold for ... I'm frustrated I wasn't there first, but excited that I'm still there in the first handful of people in this industry. Definitely a lot of financial potential here.
10:14am - To stay relevant, old time publishers like the Washington Post are going to need to transition away from a static, "crap" experience and towards a realtime one. The new experience these days is Facebook and Twitter. It's live and realtime. Who wants to go back to an old, static experience after that? I can definitely relate ... I get more news from Twitter than I do CNN ...
10:13am - "The transition from the static web to the real time web isn't just cool and exciting. There's a lot of money there!"
10:11am - "Twitter is doing 230 million checkins a day ... and we'll look back at that later and laugh and think it was a toy."
10:09am - Look for the products and innovations that should be built a year from now or 5 years from now. If you focus on what needs to be built now, you've already missed the boat.
10:08am - Society tells you that you can't predict the future. We think that's crap.
10:05am - "The transition from the static web to the realtime web is as important as the transition from the quill to the printing press."
10:02am - A "nontechnical" presentation at a tech-centered conference? Hmm ...
9:54am - In the meantime, I'm wondering why so many "real time" applications (like the aforementioned Google Reader demo) are only realtime on the server side and not on the client side. It would be huge if my feed could update in realtime. Speaking as a publisher, it would be awesome if I could update my readers in realtime as well. I think there might be a definite use for adding a meta tag to the headers of my documents to link to a realtime hub.
It's just a question of convincing more people to update content for the client in realtime once I start pushing content to aggregators in realtime.
9:53am - Awesome presentation with some live demos. Next up is Khris Loux talking about realtime and revenue. I'll be taking a copious amount of notes.
9:50am - The Google Reader demo was a server-to-server interaction ... not a server-to-client interaction. So while your feed would be updated on Google's system, you wouldn't see an update until you click the Reload button ...
9:45am - Greetings to all of you reading this site in real-time. Google Analytics tells me there are 6 of you at the moment. Realtime web in action! :-)
9:43am - Apparently Tumblr uses a hub to push feed content out to subscribers in real time. I'm now wondering why WordPress doesn't use a similar setup ... and just finally discovered a potential business use for SwiftStream at the same time ...
9:40am - We use three parties: the publisher who has the data, the subscriber who wants the data, and the hub that routes data between the other two.
9:39am - The only widely-used protocol on the web is HTTP, even though there are better protocols out there. So to make a realtime web on a large scale, we'll need to use what's already available and ubiquitous.
9:33am - A clock is a real-life realtime example. You could poll it ... wake up every minute and see if it's time to get up, or just wait for the alarm to go off instead. I'd rather wait for the alarm.
9:32am - Realtime doesn't mean it has to be now ... realtime can be really slow.
9:30am - "The key to real time is to be like the kid in the backseat asking 'are we there yet are we there yet are we there yet ...'" I'm impressed, Julien must read my blog.
9:25am - First session is a little late, but looks to be pretty good regardless. I heard Julien talking up his presentation during breakfast, so I'm looking forward to it. Now that they've gotten the microphone working, that is ...