Four Rules for Enterprise Application Performance

By TMCnet Special Guest
David Seidman, Director of Product Management, BlueStripe Software | April 06, 2015

As Director of Product Management at BlueStripe Software, I get to see how large-scale distributed enterprise applications are designed and how they work (and don’t work). I engage with customers and examine their distributed applications up close, and I’ve personally reviewed hundreds of application deployments across a variety of industries, application types, technology stacks, and architectural philosophies. I’ve also been lucky enough to talk with IT executives about the problems that concern them most.

From this vantage point, I’ve developed four rules for enterprise application performance – and some observations about styles in building enterprise applications.

Rule 1.  If something is technologically possible, it’s being done in a data center somewhere.

Large enterprises do amazing things. We’ve seen servers using many thousands of processes, ports, network connections, and disks, sometimes with extremely rapid connection establishment and teardown. We’ve seen load balancers and encryption used between every tier of a distributed application, and we’ve seen those same technologies used nowhere in an application – sometimes within the same vertical. We’ve seen a single web transaction trigger 20,000 SQL transactions, intentionally and by design. We’ve seen transaction errors deliberately used to drive ordinary application control flow, and ATMs running an entire multi-tier application inside mid-sized desktop PCs. There is tremendous variety in the architectural decisions that can be made – and there are many different paths to the same result.
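To make the 20,000-SQL-transactions case concrete, here is a minimal Python sketch of one common way a single request fans out: a per-row query loop. The table and column names are hypothetical, purely for illustration, not taken from any real deployment.

    import sqlite3

    def handle_order_page(conn, customer_id):
        # One query fetches the customer's order IDs...
        orders = conn.execute(
            "SELECT id FROM orders WHERE customer_id = ?", (customer_id,)
        ).fetchall()
        line_items = []
        # ...then a separate query runs for every order row. With 20,000
        # orders, one page view becomes 20,001 SQL statements.
        for (order_id,) in orders:
            line_items.extend(conn.execute(
                "SELECT sku, qty FROM line_items WHERE order_id = ?",
                (order_id,)
            ).fetchall())
        return line_items

Whether this pattern is a mistake or a deliberate trade-off depends entirely on the application; the point is that the fan-out is easy to produce and invisible from the web tier.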

Rule 2. Most architectural choices are neither right nor wrong – but they are often surprising.

Enterprise IT departments make lots of choices – almost always for reasons that made sense for their businesses at the time. To an outsider looking at a customer’s well-established application, those reasons aren’t always immediately clear. IT operations teams often have the same “who designed this thing anyway?” reaction when they look under the covers at how their own longstanding in-house applications are put together. (New employees are particularly susceptible to this reaction!) When working on firefights or application audits, it is important to avoid quick judgments and stick to the facts. Sometimes we’ll see something that looks strange to us, but it is entirely by design and the application team has no interest in changing it. Other times we’ll see behavior that turns out to be entirely explainable and commonplace, yet it shocks the architect and leads to rapid changes.

Rule 3. Many application performance problems are the result of choices that clearly ARE wrong.

There’s a big difference between an architectural choice and a mistake. We’ve seen production applications sending DNS queries to development servers, and applications from many different groups within a business tapping into the same database without ever clearing it with the database’s actual owner. We’ve seen IPv6 address resolutions occurring by the thousands per hour in data centers that shouldn’t be using IPv6 at all, with nobody able to say why. We’ve seen servers running for years where no one knew what the server was for, but everyone was afraid to decommission or move it for fear of breaking someone’s application. These are all outright mistakes – and they happen frequently.
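The IPv6 case often has a mundane mechanical explanation. As one hypothetical illustration (an assumption on my part, not a diagnosis of any specific customer), the Python sketch below shows how a standard name lookup with an unspecified address family asks the resolver for both A (IPv4) and AAAA (IPv6) records, so AAAA queries accumulate even when nothing ever uses an IPv6 answer.

    import socket

    def resolve_any(hostname):
        # AF_UNSPEC asks the resolver for both A (IPv4) and AAAA (IPv6)
        # records; many client libraries default to this, which is one
        # common source of unexpected AAAA query traffic.
        return socket.getaddrinfo(hostname, 443,
                                  family=socket.AF_UNSPEC,
                                  type=socket.SOCK_STREAM)

    def resolve_ipv4_only(hostname):
        # Pinning the family to AF_INET avoids the AAAA lookup entirely.
        return socket.getaddrinfo(hostname, 443,
                                  family=socket.AF_INET,
                                  type=socket.SOCK_STREAM)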

Rule 4. Application visibility drives application performance improvements.

Regardless of whether the source of an issue is an architectural decision that made sense at the time (but now causes problems), a misconfiguration of infrastructure resources, or a bungled transition from development to operations, visibility into the actual workings of distributed applications lets people see both the problem and the solution quickly.
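To show what visibility can mean at the code level, here is a small generic Python sketch (my own illustration, not a description of BlueStripe’s product) that times every hop a transaction makes and logs the result, so a slow tier stands out immediately.

    import logging
    import time
    from contextlib import contextmanager

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    @contextmanager
    def timed_hop(transaction_id, hop_name):
        # Record how long one hop of a transaction takes, and log it
        # even if the hop raises an exception.
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("txn=%s hop=%s elapsed_ms=%.1f",
                         transaction_id, hop_name, elapsed_ms)

    # Usage: wrap each tier a transaction touches, e.g.
    # with timed_hop("txn-42", "inventory-db"):
    #     rows = db_conn.execute("SELECT ...").fetchall()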

The breadth of application technologies is vast, covering transaction protocols, server and network internals, middleware and database products, legacy technologies, and programming languages. The key to making sense of it all is to combine hard facts about dynamic application infrastructure – which is what BlueStripe specializes in – with the knowledge and history that exists in each enterprise. Amazing improvements in performance and availability become possible when these two worlds are joined.

David Seidman is the Director of Product Management at BlueStripe Software.


Edited by Maurice Nagle