Recent Posts
Hardware Asset Management Is the IT Discipline Most Organizations Do Badly
Hardware asset management — knowing what physical devices the organization owns, where they are, who has them, what software is installed on them, and when they need to be refreshed or retired — is foundational to almost every other IT function. Security teams need accurate asset inventory to understand their attack surface. Support teams need device configuration data to resolve issues efficiently. Finance teams need asset records for depreciation and insurance. Procurement teams need lifecycle data to plan refresh cycles.
Low-Code Platforms Have Found Their Ceiling
Low-code and no-code platforms arrived with a promise that has been partially delivered and significantly oversold: that business users without programming backgrounds could build the software applications they needed without depending on IT development teams. The partially delivered part is real. Workflow automation tools like Power Automate, Zapier, and Make have genuinely enabled business users to build integrations and automations that previously required developer time. The oversold part is the claim that this capability extends to applications of arbitrary complexity.
Remote Support Has Changed What Good IT Support Looks Like
The IT support model that existed before 2020 was built around physical proximity. The helpdesk sat in the office building. Employees who needed support walked to the helpdesk or the helpdesk walked to the employee. Hardware issues were resolved in person, hands on the device. The model had inefficiencies — the helpdesk sat idle when nobody needed support, and wait times were unpredictable — but physical access naturally enforced a ceiling on how complex any support interaction could become.
Server Hardware in the Cloud Age Has a Different ROI Calculation
The cloud versus on-premises debate has settled into a more nuanced position than its early framing suggested. The argument that all workloads should move to cloud and that on-premises infrastructure would become obsolete was oversimplified. Organizations that moved everything to cloud, then discovered that certain workload categories cost more to run there than on-premises, have been quietly repatriating those workloads for several years.
The current reality is a hybrid infrastructure landscape where the economic decision about where to run a workload depends on its specific characteristics — compute intensity, data volume, access patterns, regulatory requirements, and predictability — rather than on a blanket preference for either delivery model. Server hardware investment in this context requires the same rigor as any capital investment: a specific business case for the specific workloads that the hardware will run.
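The workload-level economics described above can be sketched as a simple comparison. The rates, capex figure, and utilization profiles below are illustrative assumptions, not vendor pricing or benchmarks; the point is only the shape of the trade-off — usage-based cost versus amortized fixed cost:

```python
# Illustrative sketch: comparing cloud vs. on-premises cost for one workload.
# All inputs are hypothetical assumptions, not real vendor pricing.

def monthly_cloud_cost(vcpu_hours: float, egress_gb: float,
                       rate_per_vcpu_hour: float = 0.05,
                       rate_per_egress_gb: float = 0.09) -> float:
    """Usage-based cost: scales with compute hours and data egress."""
    return vcpu_hours * rate_per_vcpu_hour + egress_gb * rate_per_egress_gb

def monthly_onprem_cost(server_capex: float, amortization_months: int = 48,
                        monthly_opex: float = 400.0) -> float:
    """Amortized cost: hardware spread over its service life, plus power,
    space, and support -- largely independent of utilization."""
    return server_capex / amortization_months + monthly_opex

# A steady, compute-heavy workload (16 vCPUs busy ~730 hrs/month, heavy
# egress): high, predictable utilization favors owned hardware.
steady = monthly_cloud_cost(vcpu_hours=16 * 730, egress_gb=2000)
owned = monthly_onprem_cost(server_capex=12_000)
print(f"steady workload -- cloud: ${steady:,.0f}/mo, on-prem: ${owned:,.0f}/mo")

# A bursty workload (same peak size, busy only ~60 hrs/month): low average
# utilization favors usage-based pricing.
bursty = monthly_cloud_cost(vcpu_hours=16 * 60, egress_gb=100)
print(f"bursty workload -- cloud: ${bursty:,.0f}/mo, on-prem: ${owned:,.0f}/mo")
```

Under these assumed numbers the steady workload is cheaper on owned hardware and the bursty one is cheaper in cloud, which is exactly the characteristic-by-characteristic decision the post describes.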
The Vulnerability Management Backlog Every Organization Has and Nobody Talks About
Vulnerability management programs have a dirty secret that annual security assessments and compliance audits politely decline to examine: the remediation backlog. Organizations that have deployed vulnerability scanners — Tenable, Qualys, Rapid7 — know their vulnerability count precisely. Most of them have more open vulnerabilities than they will remediate in the coming year. Many have more open vulnerabilities than they will remediate in the next three years at their current remediation pace.
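The backlog arithmetic is worth making explicit: what matters is not the remediation rate but the net rate after new findings arrive. A minimal sketch, with hypothetical counts standing in for whatever a scanner like Tenable or Qualys actually reports:

```python
# Illustrative sketch: why a remediation backlog can take years to clear,
# or never clear at all. Inputs are hypothetical placeholder figures.

def months_to_clear(open_vulns: int, remediated_per_month: int,
                    new_per_month: int):
    """Months until the backlog reaches zero at the current pace, or
    None if new findings arrive faster than remediations land."""
    net_reduction = remediated_per_month - new_per_month
    if net_reduction <= 0:
        return None  # backlog grows without bound at this pace
    return open_vulns / net_reduction

# 40,000 open findings, 1,500 fixed per month, 1,200 new per month:
# net progress is only 300/month, so clearing takes over a decade.
print(months_to_clear(40_000, 1_500, 1_200))  # ~133 months

# If new findings outpace remediation, no finite clearing time exists.
print(months_to_clear(40_000, 1_000, 1_200))  # None
```

The second case is the one annual assessments decline to examine: a precisely known vulnerability count attached to a pace that will never reach zero.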
AI in Enterprise IT: Where It Is Actually Saving Time
Enterprise IT has adopted AI-assisted tools at an uneven pace across its functional areas. The unevenness reflects a genuine difference in the maturity of AI applications across contexts — some IT functions have clear, measurable AI use cases with documented productivity gains, while others have AI vendor claims that have not translated to operational reality at the scale most enterprises require.
The honest assessment of where AI is saving time in enterprise IT is narrow but real: specific use cases within IT support, security operations, and software development assistance have demonstrated consistent productivity gains. The broader claims — AI transformation of IT operations across all functions — remain future-oriented rather than present-tense.
The IT Budget Allocation Problem That Keeps CIOs Up at Night
The IT budget allocation problem is structural, not mathematical. Organizations that spend the right total amount on IT frequently allocate it incorrectly across the four functional areas — run the business, grow the business, transform the business, and maintain the infrastructure that enables all three — producing technology environments that are simultaneously overspent in some areas and critically underfunded in others.
The allocation pattern that is most common and most damaging is heavy spending on new software and technology initiatives with insufficient investment in the support, security, and infrastructure maintenance that determines whether those investments function reliably. An organization that spends aggressively on digital transformation while deferring network infrastructure refresh, understaffing the helpdesk, and running security with inadequate tooling has not made a strategic trade-off. It has made an accounting error that looks like a strategic choice.
BYOD Policy Has Produced Security Problems Nobody Wants to Own
Bring Your Own Device policies were adopted by enterprise IT organizations under pressure from employees and leadership who wanted to use their personal devices for work and did not want to carry two phones. The policies were designed hastily, implemented with tools that were not ready for the management requirements they needed to meet, and left in place with minimal review as the security landscape changed around them. The result is a policy category that most IT security professionals acknowledge as a significant exposure and most organizations decline to address because addressing it requires telling employees they cannot use their personal devices for work.
The Network Infrastructure Debt Most Organizations Are Quietly Carrying
Network infrastructure occupies an unusual position in enterprise IT budget conversations. It is essential — nothing in the technology stack works without it — and invisible when functioning correctly. The invisibility is the problem. Network hardware that is approaching or past its end-of-support date, running firmware that has not been updated in years, and operating at utilization levels for which it was not designed accumulates risk silently. The incident that reveals the accumulation is not gradual. It is sudden.
Ransomware Recovery Is Where Security Programs Actually Get Tested
Ransomware preparation is the security investment that organizations discover the quality of during the worst possible moment. The backup strategy that was designed but not tested reveals its gaps when the organization needs to restore from it. The incident response plan that was written but not rehearsed reveals its gaps when the team is trying to execute it under pressure. The cyber insurance policy that was procured but not fully read reveals its requirements when the claim is filed.