Comprehensive EA Analyses Catalog for a Large Enterprise (Legacy, SaaS, Multi-Cloud)
Enterprise Architecture provides the holistic view required to align IT investments with business strategy, optimize costs, reduce risk, and guide transformation in complex enterprises. This catalog presents a practitioner-grade set of EA analyses structured by four perspectives (TOGAF-aligned, Zachman-inspired, Capability-centric, and Modern/Framework-agnostic). For each analysis, the description, importance, typical outcomes, and EA tool implementation guidance (data objects, metrics, and visualizations) are detailed. A consolidated priority ranking follows, indicating where an EA team in a large, complex enterprise should focus first.
Cost & Complexity Reduction
Application Rationalization can reduce maintenance costs by up to 20% in the first year (per Gartner estimate as cited by N-iX), eliminating redundant and outdated systems to free budget for innovation.
Business-First Alignment
Capability Maps highlight where IT supports or lags business needs, focusing investment on high-value gaps and surfacing duplication of applications.
TCO & Financial Stewardship
Embedding Total Cost of Ownership into EA shifts the conversation from “Can we implement this?” to “Should we operate this for the next 5–10 years?”.
Risk Visibility & Resilience
Lifecycle and Risk analyses reveal end-of-life tech and compliance exposures early, while EA bridges business and IT to ensure interdependencies are recognized and gaps addressed.
1. TOGAF-Aligned Analyses
Structured around TOGAF’s Architecture Development Method (ADM) phases and deliverables, emphasizing portfolio optimization, lifecycle management, roadmaps, and risk governance.
1.1 Application Portfolio Rationalization (TOGAF Phase B/D – Business & Application Architecture)
Description: A systematic review of the enterprise software application landscape to identify redundancy, legacy inefficiencies, and strategic misalignment. IT Portfolio Rationalization, guided by Enterprise Architecture, transforms what could be a simple cost-cutting exercise into a strategic lever for business agility. Organizations typically use frameworks such as Gartner’s TIME model (Tolerate, Invest, Migrate, Eliminate) to categorize each application’s disposition. In the TIME model, the technical fit pertains to the quality, maintainability, and compatibility of the application, while the functional fit refers to how well it aligns with and supports business capabilities. The four quadrants are defined as follows:
| TIME Quadrant | Technical Fit | Functional Fit | Description |
| Tolerate | High | Low | Not strategically valuable but technically adequate; maintained as-is until a better time for change |
| Invest | High | High | Integral to operations, high-quality; upgrade, expand, or integrate further |
| Migrate | Low | High | Functionally important but technically inadequate; replace with modern or cloud-based alternatives |
| Eliminate | Low | Low | Poorly performing and no longer aligned with business; targeted for removal |
An alternative decision matrix categorizes assets as Strategic (invest/innovate), Essential (maintain/optimize), Legacy (migrate or retire), Redundant (consolidate or retire), or Excess (retire immediately).
Importance: Cost efficiency, security, operational efficiency, and agility are the key drivers. Without a disciplined approach, IT environments suffer from entropy — new applications accumulate without retiring legacy systems, creating a bloated, difficult-to-manage portfolio. Rationalization is not merely about deletion; it is about making informed decisions regarding the lifespan, function, and value of every technology asset. Per a Gartner estimate (as cited by N-iX), organizations can achieve up to 20% savings in application maintenance costs within the first year of rationalization. The TIME model is widely used for application rationalization, IT strategy development, IT budget planning, cloud migration planning, M&A portfolio evaluation, technology risk management, and vendor management.
Typical Outcomes:
- Disposition decisions for each application (consolidate, decommission, modernize, invest)
- Retirement planning, migration strategies, and stakeholder communication plans for execution
- Reduced license fees, maintenance contracts, and infrastructure overhead; minimized security attack surface; streamlined integrations
- Ongoing governance via Architecture Review Boards, regular audits, and policy enforcement to prevent re-accumulation of debt
EA Tool Implementation:
- Data & Relationships: Comprehensive Application inventory (owner, function, business capability served, usage); Dependency mapping (how applications interact and feed data to others); Cost attribution (direct licenses and indirect support/maintenance/infrastructure); Business value assessment (which processes rely on each asset); Technical health attributes (supported? built on outdated frameworks?)
- Metrics & Scoring: Business criticality, technical health, functional fit, and redundancy scores for each application. TIME quadrant assignment based on technical fit vs functional fit. Annual TCO per application for cost/benefit analysis. Decision matrix classification (Strategic / Essential / Legacy / Redundant / Excess)
- Visualizations: TIME quadrant chart plotting functional fit vs technical fit for each application. Portfolio dashboards showing count and cost by category. Application-to-Capability matrices revealing redundancy and gaps. Roadmap timelines for retire/migrate plans with scheduled decommissioning dates
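The TIME quadrant assignment described above can be sketched as a simple scoring rule. The following Python illustration assumes fit scores on a 1-to-5 scale with a midpoint threshold; both the scale and the threshold are illustrative conventions, not part of the Gartner model itself:

```python
def time_quadrant(technical_fit: float, functional_fit: float,
                  threshold: float = 3.0) -> str:
    """Assign a TIME disposition from fit scores on an assumed 1-5 scale.

    The scale and the midpoint threshold are illustrative assumptions;
    real assessments calibrate both per portfolio.
    """
    tech_high = technical_fit >= threshold
    func_high = functional_fit >= threshold
    if tech_high and func_high:
        return "Invest"       # high quality, high business value
    if tech_high:
        return "Tolerate"     # technically adequate, low strategic value
    if func_high:
        return "Migrate"      # functionally vital, technically weak
    return "Eliminate"        # weak on both axes


# Hypothetical portfolio: (technical_fit, functional_fit) per application.
portfolio = {
    "CRM-Legacy":  (2.0, 4.5),
    "HR-Suite":    (4.0, 4.0),
    "Fax-Gateway": (1.5, 1.0),
    "Dept-Wiki":   (4.5, 2.0),
}

for app, (tech, func) in portfolio.items():
    print(f"{app}: {time_quadrant(tech, func)}")
```

Plotting the same two scores as x/y coordinates yields the TIME quadrant chart directly.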
1.2 Total Cost of Ownership (TCO) Analysis (TOGAF Phase F – Migration Planning & Governance)
Description: TCO analysis calculates the full lifecycle cost of IT systems to inform strategic decisions. A mature EA practice anchors decisions in TCO: it shapes portfolio rationalization, technology standardization, modernization roadmaps, and investment prioritization. TCO modeling must extend well beyond licensing or acquisition cost to include a structured lifecycle view:
| TCO Category | Components |
| Acquisition & Implementation | Software licensing/subscription, infrastructure provisioning, integration/customization, data migration, initial training |
| Ongoing Operations | Hosting (cloud or on-prem), support/maintenance contracts, monitoring/incident management, security operations, backup/DR |
| Change & Evolution | Enhancements/upgrades, regulatory compliance updates, integration expansion, scaling infrastructure |
| Indirect & Hidden Costs | Downtime impact, productivity loss from poor usability, vendor lock-in constraints, technical debt remediation |
| End-of-Life Costs | Decommissioning, data archiving, contract termination penalties, transition to replacement systems |
Importance: Without a disciplined TCO perspective, the technology landscape drifts toward inefficiency: redundant platforms proliferate, integration complexity compounds, and technical debt accumulates quietly. Embedding TCO into architecture shifts the conversation from “Can we implement this?” to “Should we operate this for the next 5–10 years?”. This elevation achieves three things: (1) aligns architecture with financial strategy by directly influencing operating margins and capital allocation, (2) strengthens governance credibility by turning architects into strategic advisors, and (3) enables rational portfolio evolution — rationalization, cloud migration, and platform consolidation become evidence-based.
A rigorous TCO model focuses on forward-looking lifecycle economics using multi-year projections (typically 5–10 years), separates fixed vs variable cost drivers, includes sensitivity analysis for growth scenarios, and incorporates risk-adjusted cost estimates. TCO requires cross-functional ownership — not just architecture, but Finance (validates cost models, discount rates, depreciation), Operations/ITSM (provides real operational cost data), Security & Risk (quantifies compliance and audit exposure), Procurement (evaluates contractual flexibility), and Business Units (assess productivity impact and adoption risk).
Typical Outcomes:
- Clear 5–10 year cost projections per system or architecture option
- Informed decisions on platform upgrades vs replacements, cloud vs on-prem hosting, and rationalization priorities
- Strengthened business cases linking architecture choices to operating margins and ROI
- Cross-functional scrutiny that reduces blind spots and shifts cost accountability from isolated ownership to collective responsibility
EA Tool Implementation:
- Data & Relationships: Cost elements per system mapped to the five TCO categories above, linked to applications/services and aggregated by business capability or project
- Metrics & Calculations: Annual and multi-year TCO (e.g., 5-year present value). Unit cost metrics (cost per user/transaction). Cost breakdown percentages (maintenance vs new development). ROI or cost-benefit scores when paired with value metrics. Sensitivity analyses (best-case/worst-case scenarios for variable costs)
- Visualizations: TCO dashboards showing each system’s total cost stack for side-by-side comparison. Cost heatmaps (business capability map color-coded by total IT cost). Waterfall charts breaking application cost into build/run/evolve components. “What-if” scenario models simulating cost impact of consolidation or cloud migration. Portfolio cost trend charts projecting spend under current vs target architecture
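The multi-year present-value calculation mentioned in the metrics can be sketched in a few lines. The cost figures and the 8% discount rate below are hypothetical; per the cross-functional ownership point above, Finance would supply the actual rate:

```python
def tco_present_value(annual_costs, discount_rate=0.08):
    """Discount a stream of year-end cost estimates to present value.

    annual_costs[0] is the year-1 outflow; the 8% default rate is a
    placeholder assumption, not a recommendation.
    """
    return sum(cost / (1 + discount_rate) ** (year + 1)
               for year, cost in enumerate(annual_costs))


# Hypothetical 5-year cost stacks for one system: acquisition-heavy
# year 1 on-prem, end-of-life costs in year 5, vs a flatter SaaS profile.
on_prem = [500_000, 180_000, 185_000, 190_000, 260_000]
saas    = [220_000, 240_000, 245_000, 250_000, 255_000]

print(f"On-prem 5-year PV: {tco_present_value(on_prem):,.0f}")
print(f"SaaS    5-year PV: {tco_present_value(saas):,.0f}")
```

Running the same function across best-case and worst-case cost streams gives the sensitivity analysis described above.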
1.3 Technology & Platform Lifecycle Analysis (TOGAF Phase C – Technology Architecture)
Description: Analysis of infrastructure and platform technologies focusing on their lifecycle stage — current, mainstream, contained, retirement planned, end-of-life (EOL), or end-of-support (EOS). Applications running on outdated infrastructure create risks that pose significant financial, reputational, and regulatory consequences. Knowing which components are reaching their end of life/end of support — and when — is crucial in addressing risks before they become threats.
Importance: Proactive lifecycle management prevents unplanned outages and security incidents from unsupported technology. It supports standardization and convergence on approved platforms while avoiding proliferation of outdated tech. The approach follows a structured workflow: automate inventory building, connect application portfolios to the infrastructure layer, streamline by eliminating duplicates, enrich with up-to-date lifecycle data from reference catalogs, and accelerate obsolescence risk assessments using built-in reports.
Typical Outcomes:
- Identification of all technologies approaching EOL/EOS with action plans (upgrade, replace, retire)
- Prioritization of unaddressed obsolescence risks based on business criticality
- Surfaced risks to application owners for collaborative remediation decisions (accept or address)
- Proactive reduction of obsolescence through technology upgrade roadmaps with progress tracking
EA Tool Implementation:
- Data & Relationships: Technology reference catalog with vendor-provided lifecycle dates (EOS, EOL). Mappings to dependent applications and infrastructure. Criticality tags for mission-critical systems. Link to reference catalogs that reduce manual lifecycle data maintenance
- Metrics: Days/months until support ends per component. Number of applications dependent on each technology (impact scope). Risk score combining criticality of apps, time since/until EOL, and availability of alternatives. Coverage of lifecycle data for accurate risk assessments
- Visualizations: Technology lifecycle heatmap (application vs underlying tech, color-coded by lifecycle status). Dashboard of upcoming EOL events on timeline. Obsolescence risk reports highlighting top high-risk components by impact. Risk mitigation roadmaps with milestone tracking
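The risk score combining criticality, time until EOL, and availability of alternatives can be illustrated with a toy formula. Everything here is an assumption for illustration: the 24-month urgency ramp, the 1-to-5 criticality scale, the per-dependent weighting, and the no-alternative penalty would all be calibrated to the organization:

```python
from datetime import date

def obsolescence_risk(eol_date: date, criticality: int, dependent_apps: int,
                      has_alternative: bool, today: date) -> float:
    """Illustrative composite score (the formula is an assumption):
    urgency ramps up over the 24 months before EOL, criticality runs
    1 (low) to 5 (mission-critical), each dependent application widens
    the impact scope, and a missing alternative adds a 50% penalty.
    """
    months_left = max((eol_date - today).days / 30.44, 0.0)
    urgency = max(0.0, 1.0 - months_left / 24.0)
    score = urgency * criticality * (1 + 0.10 * dependent_apps)
    if not has_alternative:
        score *= 1.5
    return round(score, 2)


# Past-EOL database supporting 4 applications, no approved replacement:
print(obsolescence_risk(date(2024, 6, 30), criticality=5, dependent_apps=4,
                        has_alternative=False, today=date(2025, 1, 1)))  # 10.5
```

Sorting components by this score yields the "top high-risk components by impact" report described above.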
1.4 Architecture Roadmapping & Transition Analysis (TOGAF Phase E – Opportunities & Solutions)
Description: Development of EA roadmaps — phased plans moving from “as-is” to “to-be” architecture. This involves gap analysis (comparing baseline vs target to identify what’s missing), transition-state planning (defining intermediate architectures), and dependency sequencing to coordinate projects toward strategic goals.
Importance: The roadmap translates EA recommendations into actionable steps, providing a bridge between strategy and execution. It mitigates risk by ensuring transitions don’t disrupt operations and optimizes initiative sequencing (e.g., upgrade foundation systems before dependent ones). It serves as a communication tool aligning IT, business, and finance on investment timing.
Typical Outcomes:
- Gap analysis reports listing differences between current and target capabilities, yielding requirements for new initiatives
- Transition architectures — intermediate states describing what the architecture looks like after each set of projects completes
- Roadmap diagrams mapping initiatives over time with dependencies
- Benefits realization timelines showing when value from each change is expected
EA Tool Implementation:
- Data & Relationships: Baseline and target architecture models (catalogs of processes, applications, data, technologies). Mappings between baseline and target elements indicating changes. Project portfolio data with dependencies, time frames, and linked architecture components
- Metrics: Gap counts, transition risk levels, project prioritization scores (value vs effort), milestone dates for key capabilities
- Visualizations: Multi-year roadmap timelines with project bars and capability annotations. Transition-state diagrams highlighting new/changed elements. Dependency matrix or PERT charts. Gap analysis matrices mapping each gap to its resolving project
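Dependency sequencing ("upgrade foundation systems before dependent ones") is a topological ordering problem, which the standard library solves directly. The project names and dependencies below are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical initiative portfolio: each project maps to the set of
# projects that must complete before it can start.
predecessors = {
    "Identity platform upgrade": set(),
    "Network refresh":           set(),
    "CRM replacement":           {"Identity platform upgrade"},
    "Data warehouse migration":  {"Network refresh"},
    "Analytics rollout":         {"CRM replacement", "Data warehouse migration"},
}

# static_order() raises CycleError if the roadmap contains a dependency
# cycle -- itself a useful architecture finding.
order = list(TopologicalSorter(predecessors).static_order())
print(" -> ".join(order))
```

The resulting order is one valid sequencing; real roadmaps then layer resource constraints and business timing on top of it.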
1.5 Risk, Compliance, and Security Impact Analysis (TOGAF Phase A & G – Architecture Vision & Governance)
Description: Assessing how well the architecture addresses enterprise risks and compliance requirements. Risks rarely sit neatly in silos, and a shared, organization-wide review is essential. EA provides a bridge between business and IT, ensuring interdependencies are recognized and gaps addressed.
EA helps address risks across four categories identified by The Essential Project:
| Risk Type | Examples | How EA Helps |
| Regulatory | Sarbanes-Oxley, industry rules | Maps processes and ensures compliance is supported by systems and data |
| Financial | Fraud, unauthorized trading | Provides visibility into processes and information supporting controls |
| Operational | System failures, outdated technology, DR gaps | Identifies dependencies, supports business continuity planning |
| Reputational | Data breaches, poor security | Ensures information security measures are embedded in processes |
Importance: Operational risks often look like “technology problems” but can threaten the entire business. Business continuity planning is essential — inadequate preparation for disaster recovery can be catastrophic. Major organizational changes (acquisitions, outsourcing, system migrations) expose hidden risks. A well-designed EA enables organizations to anticipate risks early, ensure compliance and transparency, map interdependencies, support resilience during major change, and reduce vulnerability to operational and reputational risks.
Typical Outcomes:
- Enterprise risk maps linking risk scenarios to business processes and systems
- Recommendations for risk mitigation projects (secondary data centers, authentication improvements, backup enhancements)
- Compliance coverage analysis — which systems are subject to which regulations and whether controls are in place
- Updated architecture principles or standards (e.g., “All critical systems must have DR with RTO < X”)
- Feeds into audit and security architecture reviews
EA Tool Implementation:
- Data & Relationships: Risk register with description, likelihood, impact, and mitigating controls linked to architecture components (business processes, applications, data, technology, vendors). Compliance requirements linked to processes/systems. BCP/DR information (RTOs, RPOs, recovery plans)
- Metrics: Risk score (likelihood × impact). Residual risk after current controls. Count of high-risk items (e.g., critical apps lacking backup). Compliance coverage percentage. Resilience metrics (RTO/RPO achieved vs required)
- Visualizations: Risk heatmap matrix plotting likelihood vs impact with color coding. Risk-to-asset mapping overlaid on capability maps showing where major risks concentrate. Compliance dashboards showing status by system. Issue tracking views for open risk mitigations
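The likelihood × impact scoring and the residual-risk metric can be sketched as follows. The banding thresholds and the linear control-effectiveness reduction are illustrative assumptions to be tuned to the organization's risk appetite:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Inherent risk on the classic 5x5 scale: likelihood x impact."""
    return likelihood * impact

def residual_risk(likelihood: int, impact: int,
                  control_effectiveness: float) -> float:
    """Score remaining after controls; effectiveness runs 0.0-1.0.
    The linear reduction is a simplifying assumption."""
    return risk_score(likelihood, impact) * (1 - control_effectiveness)

def risk_band(score: float) -> str:
    """Illustrative heatmap bands; thresholds are assumptions."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# Hypothetical: critical app without tested DR, partially mitigated.
inherent = risk_score(4, 5)
residual = residual_risk(4, 5, control_effectiveness=0.3)
print(inherent, risk_band(inherent))   # 20 High
print(residual, risk_band(residual))   # 14.0 Medium
```

Binning every register entry through `risk_band` produces the color coding for the heatmap matrix.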
2. Zachman-Inspired Analyses
Focused on ensuring completeness and consistency across Zachman’s 6×6 matrix — covering all enterprise aspects at multiple stakeholder perspectives. The Zachman Framework is an ontology (a structural model), not a methodology (a process model). It was created by John Zachman in the 1980s and refined to include six descriptive focuses (Data, Function, Network, People, Time, Motivation) and six perspectives corresponding to different roles: Planner, Owner, Designer, Builder, Subcontractor, and the functioning Enterprise.
2.1 Architectural Completeness & Consistency Analysis
Description: Verifying that the enterprise’s architecture descriptions are complete across all Zachman perspectives and facets and consistent with each other. The Zachman Framework consists of 36 categories in a two-dimensional matrix with six rows (perspectives) and six columns (interrogatives). Each cell provides a specific viewpoint for a particular stakeholder. Seven rules ensure consistency and robustness, including: columns have no order (Rule 1), each column has a basic model addressing its fundamental question (Rule 2), each cell is unique in content and focus (Rule 5), the composite of all cell models in one row constitutes a complete model from that row’s perspective (Rule 6), and the logic is recursive so it can be applied at different levels of granularity (Rule 7).
The columns address six enterprise questions: What (data, entities, relationships), How (processes, workflows, transformations), Where (locations, networks, connectivity), Who (people, roles, organizational units), When (schedules, events, time-based conditions), Why (goals, objectives, motivations).
Importance: This analysis ensures no critical aspect is overlooked. In large enterprises, complexity causes blind spots — a process with no accountable owner, a software system with unclear purpose, or data without governance. Zachman guards against these by requiring each intersection of perspective and aspect to be considered. Its primary use case includes strategic planning and alignment, ensuring IT initiatives directly align with business objectives by providing a clear structure for understanding relationships between various enterprise components.
Typical Outcomes:
- A Zachman coverage matrix highlighting filled vs empty cells with color-coded maturity of documentation
- Identification of missing artifacts (e.g., no formal business process model for a crucial function, no data dictionaries for key data)
- Traceability mappings linking strategy to capabilities to processes to systems, enabling impact analysis when business goals change
- A more complete EA repository that informs other analyses
EA Tool Implementation:
- Data & Relationships (mapped to Zachman rows):
Row 1 — Planner’s View (Scope): Enterprise Inventory Chart (data), Process Flow Diagrams (process), Location Chart (network), Organizational Chart (people), Event List (time), Goal List (motivation)
Row 2 — Owner’s View (Business Model): Entity Relationship Diagrams (data), Business Process Models in BPMN (process), Business Location Diagrams (network), Business Role Definitions (people), Business Event Diagrams (time), Business Rule Catalogs (motivation)
Row 3 — Designer’s View (System Model): Logical Data Models (data), System Process Models (process), System Network Diagrams (network), System User Roles (people), System Event Diagrams (time), System Rationale (motivation)
Row 4 — Builder’s View (Technology Model): Physical Data Models (data), Program Code (process), Physical Network Diagrams (network), Security Profiles (people), Job Schedules (time), Configuration Documentation (motivation)
Row 5 — Subcontractor’s View (Detailed Representations): Data Dictionaries (data), Detailed Code Specifications (process), Cable Schematics (network), User Manuals (people), Run Books (time), Test Plans (motivation)
Row 6 — Functioning Enterprise (Actual System): Operational Data (data), System Logs (process), Real-time Network Monitoring (network), User Activity Logs (people), Audit Trails (time), Performance Reports (motivation)
- Metrics: Coverage metrics for Zachman cells (percentage with documented artifacts). Consistency checks (e.g., each process ties to data and goals). Traceability count showing end-to-end links (goal to deployed system)
- Visualizations: 6×6 framework matrix dashboard color-coded by documentation maturity. Traceability chain diagrams from business goals to enabling technology. RACI matrices (Who vs What/How). CRUD matrices (Data vs Processes)
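The cell-coverage metric is straightforward to compute once the repository records which (perspective, interrogative) pairs have artifacts. A minimal sketch, with a hypothetical repository state:

```python
ROWS = ["Planner", "Owner", "Designer", "Builder", "Subcontractor", "Enterprise"]
COLS = ["What", "How", "Where", "Who", "When", "Why"]

def coverage(documented_cells: set) -> float:
    """Fraction of the 36 Zachman cells with at least one artifact."""
    filled = sum(1 for r in ROWS for c in COLS if (r, c) in documented_cells)
    return filled / (len(ROWS) * len(COLS))

# Hypothetical repository state: only four cells documented so far.
docs = {("Planner", "Why"), ("Owner", "What"),
        ("Owner", "How"), ("Designer", "What")}
print(f"Zachman coverage: {coverage(docs):.0%}")  # Zachman coverage: 11%
```

Extending the cell values from a boolean to a maturity level (e.g., 0–3) turns the same structure into the color-coded matrix dashboard described above.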
2.2 Perspective-Specific Analyses (What, How, Where, Who, When, Why)
Zachman encourages in-depth analysis of each fundamental aspect. The Zachman Framework matrix structure maps each combination clearly:
| Aspect / Perspective | Scope | Business Model | System Model | Technology Model | Detailed Representations | Functioning Enterprise |
| What (Data) | Objectives/Lists | Business Entities | Logical Data Models | Physical Data Models | Data Definitions/Schemas | Data Transactions/Records |
| How (Function) | Business Processes | Activities/Workflows | Application Architecture | System Architectures | Programs/Configurations | Functioning Processes |
| Where (Network) | Locations/Facilities | Business Locations | Distributed Systems | Network Configurations | Locations of Components | Operational Sites |
| Who (People) | Organizational Units | Actors/Roles | User Interfaces/Access | Identity Management | Security and Access Control | Active Users/Operators |
| When (Time) | Events/Cycles | Business Events | Processing Structures/Timing | Scheduling/Timing Specs | Timing Definitions | Actual/Event Logs |
| Why (Motivation) | Business Goals/Objectives | Business Rules/Policies | Rule Models | Implementation Strategies | Detailed Rules | Performance Metrics |
Source: Ardoq’s definitive guide to the Zachman Framework
The following summarizes each aspect-focused analysis:
- Data Architecture Analysis (What): Examines enterprise data entities and information flows for consistency, integrity, and reuse. Identifies duplicate or inconsistent data across systems (e.g., multiple customer databases requiring unification) and ensures key business concepts are defined in an Enterprise Data Model. Outcomes: Single authoritative view of core business data, elimination of redundant data stores, improved data quality. EA Tool: Data relationship models across processes and apps; metrics like data duplication counts and quality scores; conceptual/logical data models and data flow diagrams.
- Business Process & Functional Analysis (How): Detailed analysis of business processes and functions. Ensures processes are optimized, properly support capabilities and policies, and have clear owners (Who) and supporting applications (What). Outcomes: Streamlined processes, identification of bottlenecks and automation opportunities, clarity on how each process ties to value streams. EA Tool: BPMN-based process documentation; metrics like cycle time and handoff count; process flowcharts and capability-to-process matrices.
- Infrastructure & Location Analysis (Where): Reviews geographic and network architecture — data centers, cloud regions, network topology — aligned with business geography. Checks for fragmented infrastructure or single-location concentration risking resilience. Outcomes: Infrastructure consolidation decisions, network optimization, improved disaster recovery coverage. EA Tool: Locations and network links; metrics like system latency per site; maps and network diagrams showing deployments and connections.
- Organization & Role Analysis (Who): Ensures each system and process has clear ownership and that organizational design supports architecture execution. Reveals misalignments where multiple teams own overlapping systems or where critical roles are under-resourced. Outcomes: RACI matrices, governance adjustments, skill gap identification. EA Tool: Org units, roles, and their relationships to capabilities/processes; org charts, stakeholder maps, and capability vs capacity heatmaps.
- Temporal Events & Scheduling Analysis (When): Focuses on time-sensitive aspects — business cycles, events, timing requirements, and peak-season scaling needs. Ensures architecture supports critical processes and release schedules. Outcomes: Alignment of IT release schedules with business events, identification of time-related risks, improved event-driven designs. EA Tool: Business event calendars and IT job schedules; processing time vs window metrics; timelines of business cycles overlaid with system capacity.
- Strategy & Motivation Analysis (Why): Examines motivation behind architecture — business goals, strategies, drivers, rules — and ensures architectural choices are traceable to business strategy. Outcomes: Every project justified by a strategic objective, removal of investments not mapped to goals, cohesive business rules. EA Tool: Strategic goals, objectives, business principles mapped to capabilities and applications; coverage metrics (% objectives with initiatives); strategy maps and business motivation models.
3. Capability-Centric (Business-First) Analyses
Emphasizing Business Architecture: mapping and evaluating business capabilities to ensure IT is driven by business strategy. Aligned with practices from the Business Architecture Guild and capability-based planning.
3.1 Business Capability Mapping & Heatmap Analysis
Description: A Business Capability Map models the services a business offers or requires. These capabilities are modeled in the Business Conceptual layer and represent what the business does (or needs to do) to fulfil its objectives and responsibilities. Capabilities are relatively stable because they define the “what” which rarely changes, whereas processes constantly evolve as the “how” changes with technology advancement and customer demand.
Organizations define Strategic Goals and Objectives and map these to the Business Capabilities that enable them, which more clearly outlines where efforts should be focused. Business capabilities belong to a Business Domain, are governed by Business Principles, and are realized by business processes. The capability model serves as an anchor to highlight where duplication of applications exists and the potential for rationalization, which is often of interest to senior management.
Importance: Capability mapping provides a high-level overview of the business, allowing one to take a step back and focus on what the key elements are, avoiding getting bogged down in details of “how” things happen. Once key capabilities are identified — especially those that differentiate the business — this information ensures focus on areas of importance in defining new projects or ensuring business-as-usual delivers appropriately. Heatmapping capabilities by factors like strategic importance, current performance, or application count surfaces misalignments and directs investment.
Typical Outcomes:
- Identification of differentiating capabilities warranting strategic investment
- Detection of application duplication per capability, revealing consolidation opportunities
- Capability gaps where required capabilities have no supporting people, processes, or systems
- Capability-based initiative recommendations (e.g., “Improve Analytics capability”)
EA Tool Implementation:
- Data & Relationships: Capability hierarchy (Level 1 → Level 2 decomposition) grouped by domains. Mappings to: Processes that execute capabilities, Applications/Services supporting them, Data entities handled, Organization units, and Strategic goals supported. Parent/child capability relationships with positional attributes (Manage = top layer, Back = back-office, Front = domain-specific). Investment/project data linked to capabilities
- Metrics & Scoring: Capability criticality score (strategic value or revenue impact). Satisfaction/performance score (from KPIs or stakeholder surveys). Capability gap score = f(importance minus current performance). Application count per capability (flags redundancy or complexity). Investment level (annual spend on IT for each capability)
- Visualizations: Business Capability Map (grid of capability boxes organized by domain, color-coded by a chosen metric). Capability-to-Application matrix (reveals if multiple apps support the same capability or if a critical capability has no robust application). Value vs. Support quadrant (plot capabilities by business value vs current IT support quality). Strategic alignment map (trace from objectives to capabilities to initiatives)
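The capability gap score defined above as f(importance minus current performance) can be sketched directly. The 1-to-5 ratings and capability names below are hypothetical workshop inputs:

```python
def gap_score(importance: int, performance: int) -> int:
    """Gap = importance minus current performance, floored at zero
    (capabilities performing above their importance show no gap)."""
    return max(importance - performance, 0)

# Hypothetical 1-5 ratings from strategy workshops and KPI reviews:
# (strategic importance, current performance) per capability.
capabilities = {
    "Customer Analytics": (5, 2),  # strategic but weakly supported
    "Order Management":   (4, 4),  # well matched
    "Facilities Mgmt":    (2, 4),  # over-served relative to importance
}

ranked = sorted(capabilities,
                key=lambda c: gap_score(*capabilities[c]), reverse=True)
for name in ranked:
    print(f"{name}: gap {gap_score(*capabilities[name])}")
```

Color-coding the capability map by this score produces the heatmap; the over-served case (gap 0 with high performance) is itself a rationalization signal.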
3.2 Capability Dependency and Impact Analysis
Description: Models the interdependencies between business capabilities and their enabling elements to understand impact propagation. Value streams and end-to-end processes help derive these dependency links — e.g., Order Fulfillment depends on Inventory Management. By linking capabilities to value streams or customer journeys, EAs assess how capability gaps might impact end-to-end outcomes.
Importance: Large enterprises get in trouble when changing one part of the business without understanding upstream/downstream effects. Dependency analysis ensures improvements are sequenced correctly and that isolated changes don’t produce unintended consequences. It identifies capabilities that are broadly supporting many others (critical hubs) — these should get priority for stabilization.
Typical Outcomes:
- Capability dependency diagrams or matrices highlighting key linkages and critical hubs
- Identification of domino effects — a low-maturity input capability causing multiple downstream capabilities to underperform
- Recommendations for grouping related capability improvements into programs
- Fragility identification — single-threaded capabilities with no redundancy requiring contingency plans
EA Tool Implementation:
- Data & Relationships: Capability dependency maps derived from value streams and process flows. Capabilities linked to processes, applications, and data. IT infrastructure dependencies (application/data source to capabilities)
- Metrics: Centrality measures (number of other capabilities depending on a given one). Impact score per capability = total weighted importance of dependent capabilities. Coupling index (average dependencies per capability). Resilience score (multiple delivery paths vs single-threaded)
- Visualizations: Capability dependency network diagrams (nodes sized by how many others depend on them). Value stream maps annotating where capability maturity or system issues exist. Impact analysis tables (e.g., “If Capability X changes, here are the affected Capabilities, owners, and systems”)
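The centrality measure (how many capabilities depend on a given one) amounts to inverting the dependency map and counting in-degree. The capability names and links below are hypothetical, echoing the Order Fulfillment example above:

```python
from collections import defaultdict

# depends_on[capability] = the capabilities it requires as inputs,
# derived (hypothetically) from value-stream analysis.
depends_on = {
    "Order Fulfillment": {"Inventory Management", "Payment Processing"},
    "Customer Service":  {"Order Fulfillment", "Inventory Management"},
    "Returns Handling":  {"Order Fulfillment"},
}

# Invert the map: centrality = how many capabilities depend on each one.
dependents = defaultdict(set)
for cap, inputs in depends_on.items():
    for inp in inputs:
        dependents[inp].add(cap)

for cap in sorted(dependents, key=lambda c: -len(dependents[c])):
    print(f"{cap}: {len(dependents[cap])} dependent capabilities")
```

Capabilities with the highest counts are the critical hubs that should be stabilized first; those reachable from a change form the impact analysis table.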
3.3 Capability Maturity Assessment
Description: Evaluates how well each business capability is executed, typically using a defined maturity model. Organizations have recognized that the effectiveness of a Business Architecture approach is measured by how effectively the developed Business Capabilities support business or mission goals. Maturity models provide guidance for maturing business capabilities, growing them, and improving practices, grounded in industry standards and theoretical soundness.
The TOGAF-associated Architecture Capability Maturity Model (ACMM), developed by the US Department of Commerce, considers nine architecture elements:
- Architecture Process — how well architectural operations are defined, documented, and followed
- Architecture Development — methods and tools used for EA development
- Business Linkage — alignment between IT investments and core organizational objectives
- Senior Management Involvement — how effectively leadership aids EA development
- Operating Unit Participation — degree to which business units engage in EA
- Architecture Communication — how well EA processes are communicated within the organization
- IT Security — how well cybersecurity concerns are met within the EA
- Architecture Governance — mechanisms and structures to oversee and guide EA efforts
- IT Investment and Acquisition Strategy — spending on IT systems, both current and new
The ACMM defines six stages (0–5):
| Stage | Name | Description |
| --- | --- | --- |
| 0 | None | Complete lack of EA development |
| 1 | Initial | Identified as worth developing but no processes defined; ad-hoc only |
| 2 | Under Development | Processes, tools, and methods being issued; work not yet standardized |
| 3 | Defined | Active development with standardized, content-complete EA practices, aligned with business objectives |
| 4 | Managed | Performance metrics in focus; searching for additional refinement and ROI improvement |
| 5 | Measured | Optimized fully; rich data analysis drives iterative loops to prevent stagnation |
Importance: Knowing capability maturity helps prioritize where improvements are needed: a critical capability at low maturity represents a competitive weakness. Maturity assessments incorporate process, people, and tooling aspects, ensuring modernization efforts address organizational gaps alongside technology. Developing goal-enabling business capabilities gives organizations the guidance and definitions needed to integrate information and services successfully across the business.
Typical Outcomes:
- Capability maturity matrix showing current level for each capability with findings and rationale
- Identification of transformation candidates (low maturity, high strategic importance)
- Lists of improvement initiatives per capability (e.g., “Bring Capability X from Level 2 to 3 by standardizing procedures and implementing a new system”)
- Baseline for measuring progress over time
EA Tool Implementation:
- Data & Relationships: Maturity assessment data per capability — scores across dimensions (process, information, tools, metrics, organization) aggregating to an overall level. Link maturity levels to evidence/artifacts. Connect capabilities to improvement initiatives
- Metrics: Maturity level (numerical or categorical). Maturity gap (desired level minus current). Benchmark comparisons if external data available. Progress metrics tracked over annual reassessments. Business outcome metrics aligned to each level
- Visualizations: Capability maturity heatmap (capability map colored by maturity level). Maturity assessment tables (current vs target with improvement actions). Trend charts for maturity tracking. Bubble charts plotting capabilities by importance (x) vs maturity (y), with bubble size = investment — high-importance/low-maturity quadrant represents critical priorities
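The importance-vs-maturity prioritization described above reduces to a gap calculation plus a quadrant filter. A minimal sketch; the capabilities, importance weights, maturity levels, and thresholds are illustrative assumptions, not from a real assessment:

```python
# Each tuple: (name, importance 1-5, current maturity 0-5, target maturity 0-5).
# All values are invented for the example.
capabilities = [
    ("Claims Handling",     5, 2, 4),
    ("Reporting",           2, 3, 3),
    ("Customer Onboarding", 4, 1, 3),
]

def maturity_gap(current, target):
    """Desired level minus current level, floored at zero."""
    return max(target - current, 0)

def critical_priorities(caps, min_importance=4, max_maturity=2):
    """High-importance / low-maturity quadrant: the transformation candidates."""
    return [name for name, imp, cur, _ in caps
            if imp >= min_importance and cur <= max_maturity]
```

The `critical_priorities` filter corresponds to the high-importance/low-maturity quadrant of the bubble chart; those capabilities warrant improvement initiatives first.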
3.4 Value/Benefit and Investment Prioritization Analysis
Description: A portfolio-level analysis evaluating and ranking projects or initiatives based on expected business value, strategic alignment, and resource constraints. EA ensures finite investment capacity is allocated to the highest-value opportunities using criteria such as strategic fit, benefit/cost ratio, risk, and urgency.
Importance: In a large enterprise, hundreds of competing projects require a rational, transparent prioritization process aligned with strategy. This analysis ensures architectural dependencies and prerequisites inform the investment sequence (e.g., delaying a customer-facing project until a foundational data project completes). It also prevents waste on low-value projects.
Typical Outcomes:
- Ranked initiative list with clear rationale for each ranking
- Portfolio scenarios (e.g., if budget is X, fund projects 1–10; if budget is X+20%, include 11–13)
- Identification of “quick wins” (low cost, high benefit) vs “strategic bets” (high cost, transformative)
- Projects mapped by capability to ensure each strategic capability has some investment
- Executive decisions on investment allocation for the planning horizon
EA Tool Implementation:
- Data & Relationships: Initiative catalog (description, associated goals/capabilities, estimated cost, expected benefits, risk level, time to deliver, dependencies). Links to architecture components changed and constraints (budget, resources, compliance deadlines)
- Metrics & Scoring: Business value score (composite of revenue impact, cost savings, strategic alignment, compliance necessity, opportunity cost). Cost and duration. ROI/NPV or benefit-cost ratio. Risk score. Scoring model documented for transparency
- Visualizations: Benefit vs Effort quadrant chart (top-left = quick wins). Prioritization matrix table sorted by score with contributing criteria. Portfolio map by capability showing investment concentration. Treemap charts (size = cost, color = benefit score or strategic theme). Roadmap alignment charts overlaying selected initiatives on timeline
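A documented, transparent scoring model such as the one described can be as simple as a weighted sum with an inverted risk criterion, plus a benefit-vs-effort quadrant classifier. A sketch; the criteria weights, 0-10 rating scale, and threshold are illustrative assumptions:

```python
# Illustrative criteria weights for the composite business value score.
WEIGHTS = {"strategic_fit": 0.4, "benefit_cost": 0.3, "risk": 0.2, "urgency": 0.1}

def value_score(scores):
    """Composite score from criteria rated 0-10.
    Risk is inverted so that lower risk contributes a higher score."""
    s = dict(scores)                      # copy: don't mutate caller's dict
    s["risk"] = 10 - s["risk"]
    return sum(WEIGHTS[k] * s[k] for k in WEIGHTS)

def classify(benefit, effort, threshold=5):
    """Benefit-vs-effort quadrant placement for the chart described above."""
    if benefit >= threshold and effort < threshold:
        return "quick win"
    if benefit >= threshold:
        return "strategic bet"
    return "low priority" if effort < threshold else "avoid"
```

Publishing the weights alongside the ranked list keeps the prioritization transparent and lets executives challenge the inputs rather than the arithmetic.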
4. Framework-Agnostic / Modern EA Practice Analyses
Contemporary analyses reflecting current priorities like cloud adoption, technical debt management, vendor risk, and resilience — not tied to a single traditional framework but cross-cutting multiple domains.
4.1 Technical Debt Analysis
Description: Technical debt refers to the shortcuts, suboptimal solutions, or deferred maintenance in software and IT systems that accumulate over time. It often arises when organizations prioritize speed over quality, choosing quick fixes that require future rework. While it can accelerate short-term delivery, it increases long-term costs, complexity, and risks.
Technical debt can be measured through several approaches:
- Financial estimation: Organizations estimate the cost of repaying technical debt based on additional time, resources, and lost productivity
- Code analysis tools: Automated tools such as SonarQube and CAST highlight code quality issues and quantify debt in terms of effort required to fix them
- Qualitative assessment: Surveys and expert evaluations gauge the impact on business operations and IT performance
Financial estimation is usually the most effective approach for convincing management to address technical debt.
Two important sources of technical debt stand out: (1) changes in company operating strategy, where frequent shifts to the operating model create a chaotic environment that can produce a large overhang of technical debt, and (2) mergers and acquisitions, where stitching together previously independent systems leaves behind brittle, expensive integrations and poor use of licensed software.
Importance: Technical debt significantly impacts day-to-day operations by increasing system inefficiencies, slowing development cycles, and reducing business agility. It often results in security vulnerabilities, making systems more prone to breaches and compliance risks. Employee productivity suffers as developers spend more time troubleshooting rather than focusing on innovation. Unresolved technical debt can lead to competitive disadvantage as companies struggle to adapt to market changes.
EAs play a crucial role by establishing governance, business, and technical frameworks while focusing on solving pressing business problems. They can advocate for technical debt assessments during architectural reviews and ensure decision-makers understand trade-offs, leveraging financial estimations. To increase success odds, technical debt reduction must be aligned with business objectives. EAs should communicate the business risks and provide actionable roadmaps, incorporating debt into enterprise architecture roadmaps and digital transformation strategies.
Typical Outcomes:
- Technical debt register listing major debt items with estimated cost and impact
- Prioritization of refactoring or modernization projects to tackle high-ROI debt repayments
- Integration of debt reduction into development lifecycles through capability-based roadmaps using value streams
- Leadership allocation of dedicated budget for technical debt reduction efforts
EA Tool Implementation:
- Data & Relationships: Technical debt items linked to specific applications or components, with attributes: description, debt type (code, architectural, infrastructure), impact, estimated remediation effort, owner/team. Data from code analysis tools (SonarQube, CAST), architecture reviews, and operations reports. Links to risk entries and strategy
- Metrics: Total debt count or magnitude (sum of estimated person-days). Debt by category (UI, backend, infrastructure). Debt by system (individual scores). Trend over time. Debt criticality (minor code smells vs security vulnerabilities from outdated OS). “Cost to fix vs cost to rebuild” ratios
- Visualizations: Technical debt heatmap (portfolio map colored by debt level). Bubble charts (x = debt score, y = business value, highlighting high-value systems saddled with debt). Debt timeline (total outstanding debt over time showing progress). Pareto charts for debt causes
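The "cost to fix vs cost to rebuild" thinking above suggests ranking debt register items by remediation ROI: annual carrying cost avoided per remediation dollar. A minimal sketch with an illustrative register (all items and figures are invented for the example):

```python
# Each tuple: (item, annual carrying cost in $, one-time remediation cost in $).
debt_register = [
    ("Unsupported OS on billing host", 120_000, 40_000),
    ("Hand-rolled auth in portal",      30_000, 60_000),
    ("Copy-paste ETL scripts",          50_000, 25_000),
]

def repayment_roi(carrying, remediation):
    """Annual carrying cost avoided per dollar spent on remediation."""
    return carrying / remediation

def prioritized(register):
    """Register sorted so the highest-ROI repayments come first."""
    return sorted(register, key=lambda i: repayment_roi(i[1], i[2]), reverse=True)
```

Expressing debt this way supports the financial-estimation approach the text recommends: a ratio of 3.0 means each remediation dollar removes three dollars of annual drag, which is an argument management can act on.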
4.2 Cloud Suitability & Migration Analysis
Description: Evaluating applications or workloads for their suitability for cloud migration or modernization. The TIME model directly supports this: applications in the Migrate quadrant are often good candidates for cloud migration, while those in the Tolerate quadrant might be better suited to remain on-premises. The analysis considers three key focus areas: Business (criticality, time to market/agility, future enhancements, growth expectations), Data (sensitivity, integrity, regulatory restrictions, transfer costs), and Technology (scalability needs, availability requirements, legacy dependencies, platform compatibility).
Key parameters for assessment include: necessity/motivation, evaluation of compliance-cost-capability factors, cloud features, and security. Applications already identified for retirement should not be migrated; business criticality and time-to-market urgency drive the evaluation. Data sensitivity (for example, financial institutions often must keep data on-premises for legal and regulatory reasons) and transfer volumes affect cloud cost and feasibility. Technology assessment covers scalability, availability needs, legacy dependencies, and platform constraints.
For workloads migrating to Azure, Microsoft’s Cloud Adoption Framework recommends assessing workload architecture, application code, and databases, with structured assessment of components, dependencies, security configurations, and compliance requirements.
Importance: Cloud adoption is a key enabler for scalability, faster deployment, and cost variability. However, migrating without analysis leads to failures or cost overruns. A structured suitability analysis prevents missteps by prioritizing migrations with the best value while flagging applications that should not move. Business criticality is given the highest priority, followed by data considerations, then technology; technologically complex applications can be migrated after successful modernization.
Typical Outcomes:
- Cloud disposition recommendation per application (using 6R strategies: rehost, replatform, refactor, rebuild, replace, retire)
- Migration wave groupings that minimize broken dependencies between waves
- Risk and prerequisites identified for each wave (upgrade requirements, data confidentiality concerns, compatibility remediations)
- Cost projections comparing infrastructure costs (on-prem vs cloud)
EA Tool Implementation:
- Data & Relationships: Application inventory with cloud-relevant attributes: technology stack, dependencies (tight coupling to hardware/network?), data sensitivity and regulatory constraints, usage patterns (steady vs spiky demand), licensing/vendor cloud support, and constraints. Complete documentation of all security and identity configurations including service accounts, encryption methods, and firewall rules. Internal and external dependency mapping
- Metrics & Scoring: Cloud readiness score combining: business value gain from cloud features, technical readiness (complexity to move), cost impact (will cloud reduce TCO?), and risk (compliance/security concerns). Classification by migration strategy. Percentage of portfolio in cloud vs on-prem to track progress
- Visualizations: Cloud readiness matrix (ease of migration vs benefit, bubble size = cost/risk). Portfolio segmentation charts (apps by readiness category). Migration roadmap timeline showing cumulative percentage of portfolio in cloud by date. Cost projection charts comparing on-prem vs cloud infrastructure costs
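The cloud readiness score described above combines four weighted factors, with risk inverted so low-risk applications score higher. A minimal sketch; the weights, 0-10 rating scale, and score-to-6R mapping are illustrative assumptions, not part of any official framework:

```python
def cloud_readiness(business_gain, tech_readiness, cost_impact, risk,
                    weights=(0.35, 0.25, 0.25, 0.15)):
    """Composite readiness score from factors rated 0-10.
    Risk is inverted: a low-risk application contributes a high factor."""
    factors = (business_gain, tech_readiness, cost_impact, 10 - risk)
    return sum(w * f for w, f in zip(weights, factors))

def disposition(score):
    """Coarse, illustrative mapping from readiness score to a 6R starting point."""
    if score >= 7:
        return "rehost/replatform"
    if score >= 4:
        return "refactor/rebuild"
    return "retain or retire"
```

As the text notes, applications already slated for retirement should be excluded before scoring; the disposition here is only a starting point for the per-application 6R decision.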
4.3 Vendor and Dependency Concentration Analysis
Description: An examination of vendors and external dependencies in the IT landscape assessing risks of over-reliance and opportunities for diversification. Vendor lock-in occurs when a customer becomes overly dependent on a particular vendor for products and services, making it costly or technically difficult to switch to an alternative. While leveraging vendor offerings can speed up development and reduce initial costs, the long-term implications of lock-in can severely affect agility, innovation, and cost control.
Architectural decisions that increase lock-in likelihood include: using proprietary APIs, lack of abstraction layers, data format dependencies, tight coupling to vendor services, and non-standard infrastructure. The risks span: loss of negotiation power, cost escalation (vendors raise prices knowing switching costs deter customers), innovation stagnation, service disruptions, and regulatory/compliance issues.
Importance: In multi-cloud strategies, concentration analysis reveals whether the enterprise is truly avoiding lock-in or unintentionally concentrating on one provider. Cloud platforms (AWS, Azure, GCP) vary in implementation and availability of managed services, and as organizations adopt platform-specific services (AWS Lambda, Azure Cosmos DB, Google BigQuery), reliance increases. Migration between cloud providers often requires rewriting code, adjusting configurations, and ensuring compatibility.
Mitigation strategies identified in the literature include: abstraction layers, open standards and protocols (REST, GraphQL, OAuth), multi-cloud and hybrid architectures, containerization (Docker, Kubernetes), data portability (JSON, CSV, Parquet formats), modular/microservices design, Infrastructure as Code (Terraform, Pulumi, Ansible), and SLA/exit strategy negotiation.
Tradeoff consideration: Avoiding lock-in entirely may not always be practical or cost-effective. Vendor services are often optimized for performance, scalability, and ease of use, and designing for total portability can increase initial development effort and complexity. Businesses must weigh time-to-market vs portability, cost vs flexibility, and innovation vs stability. A pragmatic approach involves identifying core systems requiring portability and focusing abstraction efforts there, while allowing less critical workloads to use vendor-optimized services.
Typical Outcomes:
- Vendor portfolio report showing each major vendor and dependent systems/capabilities
- Risk mitigation actions for high-concentration vendors (secondary solutions, contingency plans)
- Consolidation opportunities (standardize on one product where multiple competing products exist)
- Improved vendor negotiation strategies leveraging EA’s cross-silo visibility
- Technology diversification recommendations (e.g., using Kubernetes to avoid single-cloud platform lock-in)
EA Tool Implementation:
- Data & Relationships: Vendor catalog linked to applications and technologies. Contracts/SLAs from vendor management systems. Criticality attributes (mission-critical service provider?). Spend per vendor from procurement
- Metrics: Vendor concentration percentage (share of total systems or spend by top N vendors). Single vendor points of failure count. Diversity index. Contract expiration timeline
- Visualizations: Vendor dependency matrix (vendors vs applications/capabilities). Pie/bar charts for spend by vendor and count of systems by vendor. Risk heatmap for vendors (ranking by financial stability, single-sourcing risk). Contract renewal timeline aligned with roadmap. Network graph where vendor nodes connect to all dependent applications
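The concentration percentage and diversity index above can be computed from per-vendor spend; the Herfindahl-Hirschman index (HHI), a sum of squared shares, is one common concentration measure. A sketch with invented spend figures:

```python
# Illustrative annual spend per vendor; figures are invented for the example.
spend = {"VendorA": 500_000, "VendorB": 300_000, "VendorC": 200_000}

def top_n_share(spend, n=1):
    """Share of total spend held by the top N vendors."""
    total = sum(spend.values())
    top = sorted(spend.values(), reverse=True)[:n]
    return sum(top) / total

def hhi(spend):
    """Herfindahl-Hirschman index: ranges from 1/len(spend) (evenly
    spread) up to 1.0 (everything with a single vendor)."""
    total = sum(spend.values())
    return sum((v / total) ** 2 for v in spend.values())
```

Tracked over time, a rising HHI signals creeping concentration even when no single contract looks alarming on its own.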
4.4 Application Maturity Assessment
Description: An evaluation of individual application health using dimensions such as business value and technical fitness. Per a framework used at University College Dublin's IT Services, business value assessment typically considers strategic alignment, operational needs, usability, and value for money, while technical fit examines architecture alignment, in-house skills availability, maintainability, security posture, and vendor viability. Applications are plotted on a Business Value vs Technical Fit quadrant to determine disposition: invest in high-value/high-fit, modernize high-value/low-fit, tolerate low-value/high-fit, and eliminate low-value/low-fit (aligning closely with the TIME model discussed in Section 1.1).
Importance: Application maturity differs from capability maturity in that it focuses on the technical and operational health of individual applications rather than business process quality. It provides input to rationalization by identifying which specific apps are aging or underperforming, and feeds into lifecycle and technical debt analyses.
Typical Outcomes:
- Per-application maturity profiles with scores across multiple dimensions
- Identification of applications requiring modernization, investment, or retirement based on combined business and technical assessment
- Data for TIME quadrant placement and rationalization decisions
EA Tool Implementation:
- Data & Relationships: Application attributes for scoring dimensions (business value components: strategic alignment, operational criticality, user satisfaction; technical fit components: architecture alignment, maintainability, security posture, vendor viability). Survey/workshop outputs stored per application
- Metrics: Business value score (composite). Technical fit score (composite). Combined maturity score. Gap between current and target maturity per application
- Visualizations: Business Value vs Technical Fit quadrant (similar to TIME). Application maturity scorecards. Portfolio bar charts grouping applications by maturity level
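The Business Value vs Technical Fit quadrant placement reduces to two threshold comparisons mapped onto TIME dispositions. A minimal sketch; the 0-10 scale and the single shared threshold are illustrative simplifications:

```python
def time_quadrant(business_value, technical_fit, threshold=5):
    """Place an application in a TIME quadrant from its two composite
    scores (each rated 0-10). Thresholds are illustrative."""
    if business_value >= threshold:
        # High value: keep it, and decide whether it needs modernization.
        return "Invest" if technical_fit >= threshold else "Migrate"
    # Low value: tolerate if technically sound, otherwise eliminate.
    return "Tolerate" if technical_fit >= threshold else "Eliminate"
```

In practice each axis is itself a weighted composite of the sub-dimensions listed above (strategic alignment, maintainability, security posture, and so on), and the thresholds are calibrated per portfolio.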
4.5 Resilience and Disaster Recovery Analysis
Description: Analysis of the resiliency of critical business services: mapping failure scenarios and ensuring architectures include redundancy, failover, and recovery capabilities. As The Essential Project notes, business continuity planning is essential, and inadequate disaster recovery preparation can be catastrophic; information security gaps can escalate quickly into reputational risks. EA's role is to map processes, systems, and their dependencies to anticipate and mitigate vulnerabilities.
Importance: Major organizational changes (acquisitions, outsourcing, system migrations) expose hidden risks. Even zero-day vulnerabilities, such as the MOVEit Transfer breach cited in The Essential Project article, highlight the risks of relying on third-party providers; EA should flag the sensitivity of exposing critical data externally.
Typical Outcomes:
- Identification of single points of failure with recommendations for high-availability solutions
- DR plans aligned with business continuity requirements (RTO/RPO)
- Dependency diagrams annotated with redundancy provisions (or lack thereof)
EA Tool Implementation:
- Data & Relationships: Dependencies from other analyses, continuity provisions per component (secondary site? automated failover?), RTO/RPO targets vs achieved
- Metrics: Components lacking backup or failover. RTO/RPO compliance percentage. Resilience score per critical service
- Visualizations: Dependency diagrams with redundancy annotations. Heatmaps on architecture diagrams highlighting components with no backup
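The RTO/RPO compliance percentage above is a straightforward target-vs-achieved check per critical service. A sketch with illustrative services and recovery figures (all values invented for the example):

```python
# Each tuple: (service, RTO target h, RTO achieved h, RPO target h, RPO achieved h).
services = [
    ("Payments",   1,  0.5, 0.25, 0.25),
    ("Reporting", 24, 48,   4,    4),
]

def is_compliant(s):
    """A service complies when both achieved values meet their targets."""
    _, rto_target, rto_achieved, rpo_target, rpo_achieved = s
    return rto_achieved <= rto_target and rpo_achieved <= rpo_target

def compliance_pct(services):
    """Share of critical services meeting both RTO and RPO targets."""
    return 100 * sum(map(is_compliant, services)) / len(services)
```

Non-compliant services feed directly into the single-point-of-failure findings and DR remediation plans listed under Typical Outcomes.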
Consolidated Priority Ranking of EA Analyses
The following ranking reflects the sequence an EA team in a large, complex enterprise should follow to deliver measurable value, considering executive relevance, decision leverage, data readiness, and dependencies between analyses:
| Priority | Analysis | Category | Rationale |
| --- | --- | --- | --- |
| 1 | Application Portfolio Rationalization | TOGAF | Biggest immediate ROI — eliminates redundant systems and maintenance waste (up to 20% maintenance savings per Gartner estimate). Establishes the application inventory that is the foundation for nearly every other analysis. Quick wins in cost reduction and security posture improvement (retiring legacy apps reduces attack surface). |
| 2 | Business Capability Mapping & Heatmapping | Capability | Strategic alignment driver — ensures IT targets high-value business areas. Highlights capability gaps and application duplication, guiding rationalization and investment with clear business context. Engages business leadership by speaking their language, building support for the EA program. |
| 3 | Total Cost of Ownership (TCO) Analysis | TOGAF | Anchors all decisions in financial insight — prevents “cheap-to-start, expensive-to-own” decisions by requiring 5–10 year lifecycle cost modeling. Essential for building credible business cases; strengthens EA credibility with finance stakeholders. Shapes rationalization, cloud, and portfolio decisions with hard data. |
| 4 | Technology/Platform Lifecycle (EOL/EOS) Analysis | TOGAF | Risk mitigation and continuity — identifies unsupported tech before it breaks. High executive visibility because tech failures halt business. Easy to start with available vendor data (lifecycle notices) and enables proactive scheduling of upgrades/replacements. Reduces firefighting by frontloading lifecycle management. |
| 5 | Cloud Suitability & Migration Analysis | Modern | Modernization and agility — guides efficient cloud adoption, preventing lift-and-shift pitfalls. Focuses first on apps with high cloud payoff (scalability, cost optimization). Business criticality drives prioritization, followed by data and technology factors. Often aligned with CIO mandates; EA ensures rational execution. |
| 6 | Risk/Resilience/Compliance Impact Analysis | TOGAF | Protects business value — major risks (breaches, outages, compliance violations) can exceed IT project costs. EA provides an integrated risk lens by addressing regulatory, financial, operational, and reputational risks simultaneously. Many mitigations can be tackled alongside rationalization (e.g., retiring a risky legacy system). |
| 7 | Technical Debt Analysis | Modern | Preserves long-term agility — unmanaged debt slows future projects and drives up costs. Prioritizing after immediate cost-out (rationalization) means subsequent projects (cloud migrations, etc.) don’t just port existing inefficiencies. Financial estimation approach is most effective for gaining management support. Data becomes clearer once portfolio is mapped. |
| 8 | Investment Portfolio Prioritization | Capability | Maximizes strategic impact per dollar — once cost baseline and urgent fixes are established, systematically ranking initiatives ensures the right projects move forward. Builds on capability mapping and TCO data. Executives expect EA to facilitate investment decisions for digital transformation. |
| 9 | Capability Dependency & Impact Analysis | Capability | Informs sequencing and avoids silos — ensures improvements are implemented in the right order and scope by understanding business interdependencies. Becomes critical as the transformation roadmap is defined; leverages capability maps, adding network insight to avoid unintended consequences. |
| 10 | Vendor & Dependency Concentration Analysis | Modern | Prevents lock-in and supply risks — avoids over-concentration leading to cost escalation or strategic vulnerability. Often timed with vendor contract renewal cycles. Leverages data from rationalization (which apps use which vendors). Pragmatic tradeoffs between portability and efficiency must be weighed. |
| 11 | Architecture Roadmapping & Transition Planning | TOGAF | Coordinates execution of all prior analyses — ties together rationalization, cloud moves, risk mitigations. Ranked here because it naturally occurs once target decisions are made (from higher-ranked analyses). Provides the “how and when” blueprint once “what to do” is decided. |
| 12 | Capability Maturity Assessment | Capability | Deepens improvement focus — comes after foundational capability mapping and initial quick wins. Maturity assessments require surveys, workshops, and engaged stakeholders; ACMM provides a structured approach across 9 elements with 6 stages (0–5). Highly useful for continuous improvement and benchmarking. |
| 13 | Application Maturity Assessment | Modern | Feeds into rationalization and lifecycle — provides per-application health scores. Can be initiated alongside rationalization using the same data collection (business value and technical fit dimensions). Lower separate priority because its outputs fold directly into rationalization decisions. |
| 14 | Zachman Completeness & Alignment Analysis | Zachman | Foundational rigor for EA practice maturity — important for ensuring no blind spots across the 36-cell matrix, but ranked lower for immediate business impact. It is more of an internal EA quality check. As the EA function matures and documentation becomes comprehensive, this analysis yields progressively more value. |
| 15 | Perspective-Specific Analyses (Zachman) | Zachman | Targeted, ongoing disciplines — conducted on an as-needed basis when a particular aspect is a known pain point (e.g., data inconsistencies, organizational ambiguities). As the EA practice scales, different team members continuously refine data architecture, process architecture, etc., feeding findings into higher-priority initiatives. |
| 16 | Resilience & Disaster Recovery Analysis | Modern | Operations-critical but specialized — essential in regulated or high-availability environments (banking, healthcare). For most enterprises, the core risk analysis (#6) covers initial resilience concerns, with this deeper analysis applied to the most critical systems identified therein. |
Rationale Summary
The sequence begins with analyses delivering tangible value quickly or averting imminent pain:
- Ranks 1–3 (Rationalization, Capability Mapping, TCO): Produce the inventory foundation, strategic alignment, and financial discipline needed for all subsequent work. They generate quick wins in cost savings and executive engagement.
- Ranks 4–6 (Lifecycle, Cloud, Risk): Address the technical and operational risks that can derail transformation. They require the inventory and cost data from Ranks 1–3 and ensure that modernization plans are both safe and justified.
- Ranks 7–8 (Technical Debt, Investment Prioritization): Focus on optimization and resource allocation once the baseline is understood and immediate risks are managed. They ensure future work is efficient and strategically aligned.
- Ranks 9–11 (Dependency, Vendor, Roadmapping): Ensure execution robustness — correct sequencing, supply chain resilience, and coordinated delivery of the transformation agenda.
- Ranks 12–16 (Maturity, Zachman, Resilience): Provide ongoing refinement and EA practice sustainability — deepening organizational insight, ensuring architectural completeness, and hardening critical services.
Each enterprise should adjust this ordering based on specific pain points. A heavily regulated firm should elevate compliance and risk analysis; a technology product company might elevate technical debt. However, this ranking represents a robust starting point for a large legacy-rich, multi-cloud enterprise, enabling the EA team to deliver early value while building momentum for the comprehensive transformation analyses that follow.