We realized that publishing newsletters on Friday was not a good idea, so we updated our schedule. From now on, expect our updates on the first Tuesday of every month. And today is the day 🗓.
🎃 Tricks or treats?
We’re glad to announce our new initiative—bringing reliable and scalable real-time features to serverless JavaScript applications. Describe your real-time logic using a powerful and proven abstraction—channels (via the anycable-serverless-js package), deploy your JS code to a serverless platform and AnyCable-Go to the cloud, and let the magic of AnyCable do the rest. In the full-featured example, a Next.js chat application runs on Vercel, powered by AnyCable running on Fly.io: vercel-anycable-demo.
Together, Vercel, Fly, and AnyCable make this setup as easy to run as using a PaaS solution but with all the benefits of on-prem: cost, security, flexibility, and more! Want to share this link with your JS friends?
Posts
The future of full-stack Rails: Turbo Morph Drive and Turbo View Transitions
In the series, we take a look into the future of Hotwire-driven applications and explore exciting new features like DOM morphing and View Transitions.
Supercharge your app: latency and rendering optimizations in Phoenix LiveView
The blog post goes deep into the details of the Phoenix LiveView implementation and thoroughly explains why and which optimizations have been applied to the underlying communication protocol so that the amount of data sent over the wire is as small as possible. One question may arise after reading this: “Is it worth the effort?” We’re not sure (probably), but some related efforts were deemed not worth it.
Exploring server-side diffing in Turbo
The team at 37signals revealed their experiments with sending HTML diffs over the upcoming Turbo feature—page refreshes (we mentioned it in the previous issue). The feature implies broadcasting signals to active clients to refresh the current page when some content has changed. The idea was to minimize DOM updates (not the amount of data sent) by calculating full-page HTML diffs on the server (not as sophisticated as LiveView’s approach). After some experiments, the idea was rejected in favor of client-side morphing (see the “Videos” section below).
Videos
Untangling cables and demystifying twisted transistors
Making a difference with Turbo
Hotwire Cookbook: Common Uses, Essential Patterns & Best Practices
Releases
anycable · 1.4.3 / anycable-rails · 1.4.2 / @anycable/turbo-stream · 0.4.0
These releases bring new broadcasting capabilities to AnyCable applications: broadcasting to others and batching broadcasts.
This release adds a new storage adapter for reliable streams that uses NATS JetStream under the hood. The best thing about it is that it works with the embedded NATS feature of AnyCable so that you can have a multi-node cluster with streams history support and zero external dependencies.
actioncable-enhanced-postgresql-adapter · 1.0.0
This is a custom PostgreSQL adapter for Action Cable, which mitigates the 8kb limit for broadcast messages. We may expect this functionality to be upstreamed into Rails in the future (see this issue for discussion).
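The underlying constraint is PostgreSQL’s NOTIFY payload cap of roughly 8kb. A common way to work around such caps is indirection: when a message exceeds the limit, persist it elsewhere and publish only a reference to it. Here is a self-contained sketch of that general technique (the class and names are illustrative, not the adapter’s actual code):

```ruby
require "json"
require "securerandom"

# Illustrative sketch of payload indirection for pub/sub systems with a
# hard message size cap (such as PostgreSQL's ~8kb NOTIFY payload limit).
class OversizedBroadcaster
  LIMIT = 8_000 # bytes; approximate NOTIFY payload cap

  def initialize(notifier:, store:)
    @notifier = notifier # anything responding to #notify(channel, payload)
    @store = store       # a key-value store standing in for a database table
  end

  def broadcast(channel, message)
    payload = JSON.generate(message)
    if payload.bytesize <= LIMIT
      @notifier.notify(channel, payload)
    else
      # Too big for NOTIFY: persist the payload and publish a reference
      id = SecureRandom.uuid
      @store[id] = payload
      @notifier.notify(channel, JSON.generate({"__ref" => id}))
    end
  end

  # Subscriber side: swap references back for the full payload
  def resolve(payload)
    parsed = JSON.parse(payload)
    return payload unless parsed.is_a?(Hash) && parsed.key?("__ref")

    @store.delete(parsed["__ref"])
  end
end
```

Small messages flow through NOTIFY untouched; only oversized ones pay the cost of the extra round-trip to the store.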
Frame of curiosity: cache for WebSockets
Basecamp’s idea of page refreshes and the (unexpectedly) positive reception from the community made me question some principles I had advocated for with regard to real-time applications.
For example, I was always against the signaling pattern: delivering signals to all active clients to re-request data. Why didn’t I like it? Because I usually think about any feature in terms of performance and load, and I also know that the worst kind of DDoS attack is a self-DDoS, where you accidentally turn your users into attackers. Whenever you instruct active users to perform an HTTP request at the exact same moment, you hit your server with a hammer. (And when you put broadcasting into an Active Record model callback, you potentially make this hammer beat at the speed of punk or mathcore.)
But what if we stop thinking in high-load terms and consider a small-to-medium web application where the number of users awaiting live updates on a particular page is low (dozens, not thousands)? What if we spice this idea with an extensive caching system to reuse the response payloads as much as possible? Then, we can take a look at this pattern from a different angle (probably 37° 😉).
Re-fetching data is much simpler than dealing with different user contexts and representations for different live updates. This approach is as robust as hitting “F5”—the very first “live updates” implementation. Just make sure your HTTP requests are fast enough to handle such load spikes. And here comes cache…
Caching at the HTTP level (304 Not Modified) and at the application level are well-covered topics (check out, for example, this recent talk on caching at Dev.to by Ridhwana Khan). But I have never heard of caching at the WebSocket level.
Wait, what? How can you cache WebSockets? That doesn’t make any sense, right? Yeah, sounds like a GPT hallucination, for sure. But let’s give this surreal idea a chance.
What do we use cache for? We keep fragments of data in cache for reusability (and to avoid re-calculation). Usually, cache is used to speed up read requests (GET). In WebSockets, we have neither requests nor responses, only messages flowing in each direction independently. In practice, though, we do not treat WebSockets like that. Instead, we come up with communication protocols, and such protocols may implement something similar to request-response interaction. Thus, there is room for bringing caching ideas to life.
Let’s consider the Action Cable protocol as an example.
We expect a client to send commands to the server that might be treated as requests. What are the responses we may try to cache? For the subscribe command, the response may be a combination of the confirmation message and the name of the stream to subscribe the client to. Let’s consider a canonical example:
class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_for ChatRoom.find(params[:room_id])
  end
end
And the corresponding “request” and “response”:
Request:
{"identifier": "{\"channel\":\"ChatChannel\",\"room_id\":2023}", "command": "subscribe"}
Response:
{
  "transmissions": [{"identifier": "…", "type": "confirm_subscription"}],
  "streams": ["chat_room/2023"]
}
The response is uniquely identified by the request; it’s context-free. So, why not calculate it once and reuse it every time a client sends a matching command? That’s what we could do if we had a WebSocket proxy server with caching capabilities. Like AnyCable.
AnyCable doesn’t have any kind of caching capabilities yet, but adding them would be pretty straightforward, given that we are already modeling client-server communication in request-response terms. The corresponding API for per-command caching might be as follows (inspired by Action Policy):
class ChatChannel < ApplicationCable::Channel
  cache :subscribed, expires_in: 15.minutes

  def subscribed
    stream_for ChatRoom.find(params[:room_id])
  end
end
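Under the hood, such a cache could key responses by the command payload alone, since the response depends on nothing else. A self-contained sketch of the idea (illustrative code, not AnyCable’s implementation):

```ruby
require "json"
require "digest"

# Illustrative per-command cache for an Action Cable-like protocol.
# A subscribe response is fully determined by the command payload,
# so a digest of the payload can serve as the cache key.
class CommandCache
  def initialize
    @store = {}
  end

  def fetch(command)
    # Sort keys so logically equal commands produce the same digest
    key = Digest::SHA256.hexdigest(JSON.generate(command.sort.to_h))
    @store[key] ||= yield
  end
end

cache = CommandCache.new
command = {
  "command" => "subscribe",
  "identifier" => JSON.generate({"channel" => "ChatChannel", "room_id" => 2023})
}

channel_invocations = 0
subscribe = lambda do
  channel_invocations += 1
  {
    "transmissions" => [{"type" => "confirm_subscription"}],
    "streams" => ["chat_room/2023"]
  }
end

cache.fetch(command, &subscribe)
cache.fetch(command, &subscribe)
channel_invocations # => 1: the channel code ran only once
```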
We can also introduce something similar to HTTP caching (conditional requests) to WebSocket commands. It can be used for data retrieval actions:
class ChatChannel < ApplicationCable::Channel
  def fetch_history(data)
    room = ChatRoom.find(params[:room_id])

    return unless stale?(room)

    transmit({messages: retrieve_history(room, from: data["from"])})
  end
end
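The `stale?` helper above is hypothetical (inspired by Action Controller’s `stale?`). One way it could work: derive an ETag from the record’s cache-relevant attributes, compare it with the one the client sent, and skip the heavy work on a match. In the sketch below, the client’s ETag is passed as an explicit argument for clarity; presumably it would arrive inside the command data:

```ruby
require "digest"

# Hypothetical freshness check for WebSocket commands, modeled after
# HTTP conditional requests. Not a real AnyCable/Action Cable API.
module ConditionalCommands
  # Derive an ETag from a record's identity and last-modified time
  def etag_for(record)
    Digest::SHA256.hexdigest("#{record.id}/#{record.updated_at.to_i}")
  end

  # True when the client's ETag no longer matches, i.e. the heavy
  # operation should run and fresh data should be transmitted
  def stale?(record, client_etag)
    etag_for(record) != client_etag
  end
end

# A stand-in for an Active Record model
Room = Struct.new(:id, :updated_at)

include ConditionalCommands

room = Room.new(2023, Time.at(1_700_000_000))
etag = etag_for(room)

stale?(room, etag)    # => false: server replies "not modified"
room.updated_at += 60 # the room changes...
stale?(room, etag)    # => true: recompute and transmit history
```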
Again, the result of the command execution may be stored on the WebSocket server side with an ETag attached. The client then sends the ETag along with the command, and the server may respond with a “Not modified” message and skip the heavy operation.
Does WebSocket caching still sound laughable to you?