Today on Engineering Together, I sit down with Jamie Newcomb, Head of Product Operations and Delivery at Ably, to explore how AI, particularly LLMs enhanced with Model Context Protocol (MCP) integrations, is reshaping engineering analytics and operations.
Together we dive into Jamie’s role as Ably’s AI steward and how his own team is rapidly adopting AI tooling to turn experimentation into measurable outcomes. From correlating metrics from Jellyfish with data from other systems like Confluence and Metabase, to enabling faster, more strategic decision-making, Jamie details how Ably is pushing the boundaries of what’s possible with AI-powered insights.
We also discuss what this means for the future of engineering management, how AI agents are reshaping workflows, and why companies that embrace AI-native thinking are set to lead the next era of productivity.
You can read more about Jamie and Ably’s exploration with Jellyfish’s MCP here.
The following transcript has been edited for clarity.
Adam Ferrari: Welcome to Engineering Together, my corner of Substack where we talk about engineering leadership topics. Today, I'm super excited we’re joined by Jamie Newcomb, who heads up product operations and delivery for Ably. Ably is an amazing, powerful, and scalable platform for real-time user experiences. Hey, welcome Jamie.
Jamie Newcomb: Hey Adam. Thanks for having me today.
Adam: Yeah, really psyched for the conversation today. So today we're going to be talking about MCP in the context of engineering analytics. A little bit of backdrop here: MCP is Model Context Protocol. This is the powerful framework that Anthropic introduced last year that allows LLMs to reach out into external systems to pull in more data or take actions. It's been an incredibly hot topic because it obviously generalizes the capabilities of LLMs a lot. I've been doing some advising work with Jellyfish, and Jellyfish is deeply engaged with the whole AI-in-software-engineering revolution. So Jellyfish was quite early looking at MCP, and back in April, Jellyfish released an MCP integration that allows LLMs to hit the Jellyfish API and pull in the engineering analytics data that Jellyfish produces, which unlocks all kinds of interesting stuff. You can ask the LLM questions about how your productivity is, whether there are bottlenecks, stuff like that.
We thought it was super exciting. Of course, you create something like that and you've got all these dreams of how it's going to be used, and you hope that people show up and actually use it, but you don't know for sure. Jamie answered that first question of whether people would use it by picking it up and writing a great blog about it, which I thought was fantastic, concise coverage of what you had done, and it was packed with exciting stuff. You can imagine something fairly simple, like having a chat-based interface to the data, but it was a lot more interesting than that. So I thought it would be great to dive deep on that topic today: MCP and how it's unlocking LLM and AI capabilities for engineering management use cases, and the case study that you represent, Jamie, having picked that up.
Yeah, let's dive in. Maybe before we get into the nitty-gritty stuff about MCP and AI, before we dive deep, we could start out with some basics. Maybe you could tell us about your background and how you got to Ably.
Jamie: Sure. So yeah, thanks for having me on today, Adam. I've been working in tech now for around 10 years. My first role was at Arm, where I was a project manager within platforms and infrastructure. The company went through some great growth there pre and post SoftBank. I spent around four years there, and then five years ago an exciting opportunity came up at Ably, and so I joined as the first product delivery person. It was just post-Series A, and now I'm the head of product ops and delivery. So what do I do? What am I accountable for at Ably? Well, I'm ultimately accountable for the delivery of our product outcomes, aligning the teams with investment targets, and ensuring we stay adaptable and effective in how we work. And more recently I've taken on a key role in leading AI adoption at Ably, where I'm encouraging teams across the company to adopt AI tooling and exploring new ways we can leverage AI internally. This started just with engineering, and now it's spread to the whole company.
Adam: That's great. That's great. And the productivity stuff you mentioned, that's I think the clear tie-in of how you got engaged with Jellyfish. So this is how this story starts to unfold. Actually, before we dive in on the AI and MCP stuff, maybe just for folks who aren't familiar with Ably, you could say a little bit about the company. What customer problem do you solve? What do the customers look like? What's the platform like?
Jamie: Yeah, sure. So we provide real-time messaging APIs and infrastructure that power live features in apps. Some of these use cases might be live chat, real-time collaboration, live updates and notifications. There's obviously lots of products which do this, but I think what sets Ably apart is that when developers need to build these experiences at scale, reliably, they choose Ably. That's where we really come to the forefront. Our customer base ranges across many different industries, large and small customers. Some of the names are HubSpot, Genius, NASCAR. In terms of numbers, we reach around 2 billion devices per month - quite a lot of devices. In terms of where we are currently, we're focusing on getting traction with some of our new products - Chat and LiveObjects, everyone should definitely check them out, shameless plug there - and improving the developer experience across all our products as well. That's where our focus is at the moment.
Adam: That's great. That kind of generalizing and upleveling of that pub/sub capability into higher-level capabilities. Yeah. Very cool. What's the engineering organization like? How big is it, how's it structured? How do you guys work?
Jamie: Yeah, sure. So we have just short of 30 engineers, and we're organized by our products - the products I mentioned. Included in that 30, we've got a platform team who provide the services for our products, and then each team is supported by a product role. We don't have engineering managers at Ably. Our tech leads also manage the team at the moment, so we have very busy tech leads.
Adam: Yeah, today's conversation is a lot about engineering analytics. Are the different management roles engaged with the engineering metrics, including the stuff that comes out of Jellyfish?
Jamie: Yeah, I'd say they are. It was low adoption to begin with, but as we've become more data-driven as an organization, we're getting our leaders onto things like Jellyfish and looking at the metrics as well.
Adam: You mentioned earlier - I have to pick up on this - that part of your mission is to drive AI adoption. That's fascinating. Obviously super relevant to this conversation, but also just more generally interesting. I'm wondering if you could say a little bit about that part of your mission. What are your goals? What kinds of activities are you driving? How's it going? I'm just curious.
Jamie: Yeah, so just some background on this. I'm sure lots of people are feeling this, but there's a general feeling amongst CEOs that all other companies are making significant gains with AI and no one wants to be left behind. And I think they're right. I generally believe that companies who don't adopt will be left behind. The gains that can be made with these tools are a step change and I don't think we can ignore this at the moment.
So at Ably we want everyone to take this same mentality and we have three pillars of AI work at Ably at the moment - one of those is AI adoption. And when we say AI adoption, this isn't about asking ChatGPT a question. We've been doing that for a few years now and that doesn't really touch the sides. We are experimenting with lots of different tools but we want to go beyond that now.
In engineering, we have engineers using Claude Code, Copilot, Cursor, and they're seeing significant gains with what they can do. But now we want to push this more company-wide. So we're basically asking teams across the organization to look at their processes and workflows and really investigate how AI can help get big productivity and efficiency gains. The bottom line is if we can get AI supporting or replacing repeatable tasks, it means people can focus more on strategic thinking. At Ably we're fortunate we've been given a decent budget to do this, so we need to take advantage of it.
One of the things that's been discussed a lot at the moment is how do we measure this? How do we measure if this is working? I think it's twofold. You can look at raw usage - are people using the tools? But then you start asking the question, what impact are these tools having? I've been thinking about this a lot and I don't think we need to look for new ways to measure these things. Just stick with the measures we're currently using.
So teams might measure productivity by looking at velocity - just stick with the current velocity metrics, compare the pre-AI world to the AI world. For teams in engineering, it's quite easy to look at velocity metrics. That's something we do. Outside of engineering they might start looking at time spent on a process and they could start measuring that before and after AI tooling. So I don't think we need to overcomplicate it - I always say keep it simple. Let's just use the current measures we're using rather than trying to invent new ways to measure effectiveness.
Adam: Yeah, I think there's a lot of wisdom in that. I also expect probably there's going to be a desire to just also add some telemetry about how the AI is being used so you can correlate what tools and adoption drive those outcomes. But yeah, I agree fundamentally it should be making our existing metrics better.
So this is great. This is a good jumping off point. It's interesting because it sounds like you're simultaneously continuing to encourage experimentation, and also a big part of your mission is to then graduate some of those experiments into production improvements - actually get stuff integrated into the day-to-day work. But experimentation is still there, and you wrote up a great blog about how you picked up Jellyfish’s MCP. So just as a quick backdrop there: Jellyfish provides an API, and you can retrieve all sorts of analytics - those productivity metrics that you mentioned - through that API, and the MCP then makes that API accessible to AI large language models. We put out a very basic initial cut, but you picked it up practically in minutes, and I'm just curious what your thought process was in giving it a try and what you tried. I think people should definitely check out your blog, but maybe you can do a quick recap of what you did and how you got there.
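[Editor's note: for readers who want to try something similar, wiring an MCP server into a client like Claude Desktop is typically just a small entry in the client's config file. The sketch below is illustrative only - the server name, command, package, and environment variable are placeholders, not the actual Jellyfish MCP setup, so check Jellyfish's documentation for the real values.]

```json
{
  "mcpServers": {
    "jellyfish": {
      "command": "npx",
      "args": ["-y", "<jellyfish-mcp-package-placeholder>"],
      "env": {
        "JELLYFISH_API_TOKEN": "<your-api-token-placeholder>"
      }
    }
  }
}
```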
Jamie: Yeah, sure. I completely agree - I picked it up quickly. Like I've said in this call already, we're in that experimental phase, so if I'm trying to drive adoption internally, I also need to be showing how I'm using AI tools. With the Jellyfish MCP, I started with easy data access questions. So what are the current investment levels? This is what brought us to Jellyfish in the first place - the whole investment allocation side. For me that was a nice easy question that I thought it should get right. And it did.
Then I moved on to questions which require more analysis. I picked one of our teams in engineering and asked how is developer productivity in the chat team this month? This is where it started getting impressive because it identified all the relevant productivity indicators. It could have just listed every metric in Jellyfish, but it didn't - it showed the key productivity metrics we probably want to be looking at for this type of question. It then provided explanations for performance patterns. It connected the insights across different Jellyfish data points, and then it started connecting productivity to the investment levels, and also connecting the productivity metrics to the deliverable timelines. It said this outcome might be delayed because developer productivity during this time period had reduced.
I think that's where it's really cool because it was right as well. I was getting quite excited by this and I got the chat tech leads on the call and said, look at this thing I found. We basically concluded it was correct. Productivity was down because the team was split across multiple investment categories and the outcomes had just kicked off, so obviously they weren't in that phase of churning through lots of lines of code and getting through tickets.
Adam: Yeah, this is really what stood out to me when I read your blog. I went to it expecting basically a chat interface to the data, which is cool and a nice thing to have, but is that really going to help you delve deeper or highlight problems? What you actually got was this - and I think this is a story that plays out again and again with LLMs - a shocking reasoning capability. It could dig in, almost like having an analyst assistant, and on the one hand give you a clearer explanation of why a metric was what it was, and then also suggest the root causes and where you could investigate further. That's pretty wild. The other thing, the next phase you covered - not surprising after I heard it, but so interesting and surprising when I first read it - was how you started to integrate additional MCP sources. You mentioned Confluence in the blog, so you could pull in process documentation and project documentation. For example, I think one of the examples you used was that it correctly determined a project was early in its lifecycle, and so the workload you were seeing was not surprising.
Whereas if you didn't have that context - if you just had the Jellyfish numbers - you might say, hey, this project is maybe off the rails. Having that other context changes the picture. So I'm curious, maybe you could talk a little bit about what you saw from the other MCP integrations.
Jamie: Yeah, sure. Obviously once I set up one, I was like, well, what more can we do with this? I went ahead and adopted more, and this is where I could really see the value and how powerful this could be. I went from asking Claude about Jellyfish insights to having Claude look at our processes in Confluence, finding process inefficiencies based on the documented processes and then giving suggestions for process improvements. This all happens within a few minutes and would take substantially longer for a human to do.
I think the main thing I asked about was PR cycle time, which may have been higher in one team, so Claude used the Confluence MCP to look at our PR process in Confluence, started connecting the dots and then gave some suggestions for where we may want to improve that process. If we hook up all these internal tools, we have an LLM which is aware of how we set out to work, how we are actually working, what the metrics show, and the impact this is having on our goals and outcomes. That's where it ties back to why we're here - to deliver the outcomes and ship new products. I think this is a real big step forward in how we can understand inefficiencies in our workflows.
Adam: I'm curious - it makes so much sense, but it's also so science-fictiony. In your role you touch all the systems of R&D, and if you had to name a top handful - the top four or five systems you'd want to see MCP integrations for in this environment - what are the ones that are top of mind for you that round out the capability?
Jamie: Yeah. Well I think in my role, from the product delivery side, we've already got Jira and Confluence. Obviously Jellyfish brings even more metrics in, but then I started trying other ones as well - maybe not exactly applied to my role. I started looking at Metabase, which I'm sure people have heard of, and then Gong as well, because someone in our marketing team was really interested in whether we could use it with Gong. For those who don't know, Gong is basically a call recording tool we use to get insights on customer calls.
In terms of Metabase, the LLM was able to produce queries all within Claude and create dashboards all visible in Claude. I didn't even have to go into Metabase - and our Metabase is in a bit of a messy state at the moment. It started combining these Metabase dashboards with product usage data we have and then the deliverables in Jellyfish as well. Obviously we don't expect product usage data to be in Jellyfish, but it was able to correlate when we got stuff done with the product usage data.
Gong was useful for retrieving insights from customer calls, so asking it questions like: can you tell me who has shown interest in this certain chat feature? Can you tell me prospective customers who have a strong use case in the fan engagement space? It was able to provide exact quotes from customers, timestamps and links to calls. Obviously if our marketing team uses it more, they would tie that into other tools they use in this space as well.
Adam: The Gong one's fascinating, both in terms of pulling in voice of the customer - you could get signal on features - but also outcomes. You could then look at Gong calls and say, oh, what did we ship that's resonating? That's such a hard question. In the past you'd have to go listen to a hundred calls to really figure it out, and now you can have an LLM go and do all that reading and summarizing for you.
Jamie: You touched on future priorities as well. I think where I'd like to go next is more into the engineering development flow, so hooking up things like Sentry - there's a Sentry MCP, there's a Grafana MCP, so these are some of the things I'd like to explore. There's even a Slack one, so you could get sentiment analysis from an engineering team in Slack, which would be useful. I think the next step is to really look at where these can help in our engineering development workflows.
Adam: Yeah, yeah, those make total sense. It's a good transition. We were talking a lot about the nuts and bolts of what this can do and how it works. Based on what you've seen - you're on the bleeding edge of using MCP and LLMs in an engineering management or R&D management setting, and that's a pretty new concept - I'm curious whether you've started to have thoughts about how this is going to impact the engineering management role. Will we have a continuous assistant at our side that's constantly looking at the metrics and highlighting what we should pay attention to? Will we wake up with a daily report from the LLM about where we should go focus our attention? How does this change the engineering management role? Especially because the backdrop here is that the role is already changing. There are trends around flattening management structures and increasing IC headcount per manager - we're asking our managers to do more. Does this play into that trend, and what does it look like?
Jamie: I think we'll see AI agents provide context-rich insights rather than just alerts. Some examples could be noticing deployment frequency dropping and automatically correlating it with team changes and code complexity trends.
In terms of engineering managers, I think it dramatically shifts the role from - like I said, we don't have engineering managers at Ably. Our tech leads do the role, but they obviously have similar responsibilities. So I think it shifts the role from data gathering to more strategic thinking. The AI will handle what's happening, and this will allow the managers to focus on what should we do about it and growing their people.
For larger teams, AI enables this, but there's always a ceiling, isn't there? We still need that human connection for one-to-ones, coaching, mentoring, the development side. So if I was to guess, if you've got a team maybe of six to eight reports, this might enable it to go to 10 to 12, but I don't think we're going to see a jump up to 20 anytime soon.
Adam: That really resonates for me. Yeah, especially what you said about it changing where you can spend your attention, maybe more on growing people. I think it can enable you to be a better manager for a similar scope of team, rather than saying, let's make people do Herculean amounts of management.
Jamie: Yeah. I think with all these things, the most successful managers are going to be the people who partner with AI while doubling down on the things which really do require that human leadership. I think if we have people who don't want to adopt these AI tools, they're probably the ones who are going to feel it the most because you're a lot more valuable to an organization if you can make yourself more effective with AI tooling, but then also spend more time on that leadership and management side of things, which is really where we get the value from.
Adam: Totally agree. I'd love to, as we wrap up, take it to maybe a little bit of a philosophical question. Again, you're on the bleeding edge with using MCP in a real-world context, and you're talking about all these systems, these SaaS applications that are part of your environment. There's been a lot of discussion about how, as we're able to connect LLMs out to all these systems and sources of data, and that takes over some of what humans are doing in those systems, maybe that means we need less out of our SaaS applications. Maybe some of the SaaS applications go away, or they're differently conceived. They become more about management of the data and the backend business logic rather than these use-case-specific user interfaces; the LLMs maybe get direct access to APIs and are able to do a lot of that on our behalf, and we interact with the LLMs. I'm curious, now that you're in it, do you have an early point of view on that? Do you think it's a hybrid, or do you see it shifting heavily in that direction?
Jamie: Obviously there'd be some products which will probably go away, but I don't think this is about seeing the end of SaaS products in this space. I think it's going to be more of an evolution towards AI-native products that serve both humans and AI agents through protocols like MCP, what we've been talking about today. Obviously with my multi-system setup, it can connect insights across Jellyfish, Confluence, Gong. These are all correlations that would take humans hours, but you still need leaders to interpret what those insights mean. You need these leaders to know where to invest in tooling, adjust team structure or change processes. So instead of replacing these interfaces, I think we're heading towards, like you said, a hybrid model. I definitely agree with that, where AI handles the data integration, the pattern recognition, and humans focus on strategic decisions. I think companies that win in this space are going to be the ones which design for both from the start.
And those more mature products which are already out there - I think they need to start pivoting or adapting as soon as possible because that is the world we live in now, and Jellyfish is one of those; there's lots of new AI features that have come out in the last few months, and then they made the MCP server available. Obviously, companies which pretend this isn't a thing, they're probably the companies which will go away, and I don't think we should see that as the end of SaaS - they're just not adapting to where we are today.
Adam: Yeah. Yeah, that's great. I think that's a pretty smart take on it all and a good place to leave it. I can't thank you enough. I think it's a fascinating conversation. Also, I think there's a special good karma that comes for early adopters. Thanks for picking up that MCP and doing really not just basic stuff, really interesting stuff. You delved deep with it and got some really fascinating insights. I hope you keep rolling with it and keep pushing the Jellyfish crew if you need to see different or more out of it. We want to keep going around this and make it better and better. Thanks a lot for doing this today and I really appreciate it.
Jamie: Thanks Adam. Thanks for having me.