
The permitting system is not the result of bad design. It is the result of the environment it was built in.
At the time it took shape, work was manual, information lived in documents, and knowledge lived in people. Coordination across disciplines did not happen simultaneously because it couldn’t. It had to move in sequence, with each group evaluating the project in turn and passing it forward. That structure was not arbitrary. It was the only practical way to manage complex projects given the tools available at the time.
That structure still exists today, but the environment around it has changed.
Developers now operate in systems that allow them to model, test, and refine projects before they commit significant time and capital. They can evaluate constraints early, iterate quickly, and make informed decisions before advancing. The permitting system, by contrast, has not evolved in the same way. Agencies have adopted software, but largely to track work rather than perform it. Systems show where an application sits, who is reviewing it, and what step comes next, but they do not meaningfully assist in evaluating the project itself.
This creates a fundamental mismatch. There is a difference between a system that provides workflow visibility and one that provides adjudicative support. One tells you where a project is in the process. The other helps determine what should happen to that project. Most permitting systems today are designed for the former.
That gap is where much of the perceived friction originates.
Developers and agencies are not simply moving at different speeds. They are operating within fundamentally different types of systems, and that difference shapes how work gets done.
On the development side, software is used to perform work. It allows teams to evaluate constraints, test different configurations, and understand potential issues before committing resources. A site is not just selected and advanced. It is modeled, adjusted, and pressure-tested against known conditions.
On the agency side, much of that same work is still performed manually. Staff review documents, interpret maps, and draw on their own experience to determine how different constraints apply and interact. Even when digital tools are involved, they tend to support visibility rather than evaluation. They show where a project is in the process, but they do not materially change how the work itself is carried out.
This distinction is not subtle. It directly affects how information is introduced, how consistently it is applied, and how quickly it can be understood.
As discussed in The Burden We Don’t See: What Agencies Actually Face in Permitting, the responsibility to assemble and interpret information sits largely with individual staff. That means outcomes are influenced by experience, familiarity with local conditions, and the ability to connect information across sources that were never designed to work together. Two reviewers looking at the same project can reasonably arrive at different conclusions depending on how they interpret the available information.
A significant portion of permitting work is not decision-making. It is determining what applies and what needs to be evaluated.
That variability is not a personnel issue. It is a system issue.
It also creates a situation where knowledge does not scale. Each project effectively requires the same reconstruction of understanding—what applies, what conflicts exist, what needs to be addressed—because that understanding is not embedded in the system itself.
From the outside, this often gets interpreted as delay or inefficiency. In reality, it is the result of a system that relies on people to do the work that the system does not.
Permitting is structured as a sequential review, where each discipline evaluates the project in turn. That structure determines not only how work is organized, but how information is introduced and how risk becomes visible.
In a sequential system, each discipline evaluates the project based on the information available at that point in time. That information is incomplete by definition, because other disciplines have not yet contributed their perspective. As a result, early evaluations are provisional, even if they appear definitive.
This is where the pattern of progressive risk discovery begins.
A project may appear consistent with land use planning at an early stage, only to encounter environmental constraints later. Addressing those constraints may require changes that affect engineering assumptions, which then introduce new considerations related to access, infrastructure, or existing rights. Each step adds information that was not previously available, and that information has the potential to invalidate prior work.
This is not an edge case. It is a normal outcome of the structure.
A sequential review structure does not merely take time. It produces a certain kind of time—one in which certainty increases slowly, and often only after significant effort has already been invested, as explored in Why Permitting on Federal Land Is Structurally Complex.
Federal environmental review requires coordination across multiple resource areas and agencies, each with distinct statutory responsibilities.
— Council on Environmental Quality
Federal frameworks such as the National Environmental Policy Act reinforce this structure by requiring input from multiple disciplines and agencies. Historically, this coordination had to occur step by step because there was no practical way to integrate all of that information simultaneously.
The result is a system where delay is not simply a matter of how long each step takes. It is a function of how many times the process must loop back on itself as new information emerges.
This pattern is visible in real projects tracked through the Federal Permitting Dashboard, where timelines extend across multiple years and require coordination across numerous agencies.
Most efforts to reform permitting focus on improving the existing process. The assumption is that if individual steps can be made faster or more efficient, the overall timeline will improve.
That assumption overlooks how the system actually generates delay.
If risk continues to be discovered sequentially, then even a faster process will still encounter the same points of friction. Conflicts will still emerge after prior work has been completed. Adjustments will still require revisiting earlier decisions. The sequence of evaluation ensures that information arrives in stages, and that staging is what drives iteration.
Speeding up individual steps reduces the time between those iterations. It does not reduce the number of iterations themselves.
The problem is not how fast the process moves. It is when the process learns what matters.
In some cases, it can even amplify the problem. Moving faster through early stages can create a stronger sense of momentum around a project that has not yet been fully evaluated. When conflicts emerge later, the cost of adjustment is higher because more work has been built on incomplete assumptions.
This is why efforts that focus solely on acceleration tend to produce limited results. They operate within a structure that is designed to surface information late.
The issue is not that the process is too slow in isolation. It is that the process introduces critical information at a point where it is more difficult and more costly to respond.
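The argument above can be made concrete with a toy model. This is not drawn from the original text; all numbers and the cost structure are hypothetical, chosen only to show the structural effect: speeding up steps shortens each iteration, while surfacing conflicts earlier removes iterations.

```python
# Illustrative toy model (hypothetical numbers): total review time as a
# function of per-step speed versus when conflicts are discovered.

def total_time(step_time, steps, conflicts_found_late):
    """One full pass plus one rework loop per late-discovered conflict.

    Each late conflict forces the process to loop back and repeat the
    affected steps, so iterations multiply the per-step cost.
    """
    first_pass = steps * step_time
    rework = conflicts_found_late * steps * step_time  # each loop revisits prior steps
    return first_pass + rework

# Baseline: 5 review steps, 30 days each, 2 conflicts discovered late.
baseline = total_time(30, 5, 2)       # 150 + 300 = 450 days

# "Acceleration": every step twice as fast, conflicts still surface late.
accelerated = total_time(15, 5, 2)    # 75 + 150 = 225 days

# Earlier evaluation: same step speed, conflicts surfaced before review,
# so no rework loops remain.
early_eval = total_time(30, 5, 0)     # 150 days

print(baseline, accelerated, early_eval)
```

In this sketch, halving step time still leaves a timeline dominated by rework, while eliminating late discovery outperforms acceleration without touching per-step speed at all.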
The core issue in permitting is not duration. It is timing.
Under the current structure, meaningful evaluation is deferred. Key constraints are identified only after a project has entered the review process and begun to take shape. By that point, assumptions have been made, resources have been allocated, and expectations have been set.
When new information emerges, it does not simply inform the project. It disrupts it.
Projects should not enter the process to discover risk. They should enter the process having already understood it.
A different approach would shift that evaluation earlier, before the project becomes dependent on assumptions that have not yet been tested. This does not require adding more steps to the process. It requires changing where evaluation occurs relative to project development.
Instead of using the permitting process to discover constraints, the system would be used to understand them before formal submission. This allows projects to be shaped with those constraints in mind, rather than adjusted after the fact.
A modern permitting system would evaluate key constraints in parallel rather than in sequence, but that shift needs to be understood in practical terms.
Parallel evaluation does not mean that every discipline independently reviews the project at the same time without coordination. It means that the information those disciplines rely on is made available in a way that allows their interactions to be understood early.
Land use designations, environmental constraints, existing rights, and resource considerations are interconnected. When they are evaluated separately, each discipline is effectively working with a partial view of the project. That partial view is what creates the conditions for conflict to emerge later.
Parallel evaluation allows those relationships to be considered together. Instead of asking whether a project is consistent with land use in isolation, the system can evaluate that consistency alongside environmental constraints and existing rights. Conflicts that would otherwise emerge in later stages can be identified before they are embedded in the project.
This does not eliminate the need for discipline-specific review. It changes the starting point of that review. Each discipline begins with a more complete understanding of how the project interacts with other constraints, reducing the likelihood that new information will invalidate prior conclusions.
The practical effect is a reduction in rework and a more predictable progression through the process.
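A minimal sketch can show what "evaluated together" means mechanically. Everything here is hypothetical: real constraint layers are geospatial geometries, but axis-aligned boxes `(min_x, min_y, max_x, max_y)` are enough to show every layer being checked against the project footprint in one pass, with co-occurring constraints surfaced together rather than discovered one discipline at a time.

```python
# Hypothetical sketch of parallel constraint evaluation. Layers are
# simplified to axis-aligned bounding boxes for illustration.

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def evaluate(project, layers):
    """Return every constraint the project touches, plus pairs that co-occur."""
    hits = {name: box for name, box in layers.items() if intersects(project, box)}
    names = sorted(hits)
    co_occurring = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return hits, co_occurring

layers = {
    "land_use_designation": (0, 0, 50, 50),
    "habitat_boundary": (40, 40, 80, 80),
    "right_of_way": (100, 0, 120, 20),  # does not touch this project
}
project = (30, 30, 60, 60)

hits, pairs = evaluate(project, layers)
print(sorted(hits))  # all applicable constraints surface in the same pass
print(pairs)         # interactions visible before discipline-specific review
```

The design point is the shape of `evaluate`: it returns the full set of applicable constraints and their co-occurrences at once, so each discipline begins its review with the interactions already on the table instead of encountering them downstream.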
Achieving this shift requires redefining the role of the system itself.
Most current systems used in permitting are designed around visibility. They track where an application is, who is responsible for it, and what step comes next. This is useful for coordination and reporting, but it does not change how evaluation is performed. The system acts as a record of activity rather than a participant in the work.
The actual work—determining what applies, identifying conflicts, interpreting how constraints interact, and defining required actions—still happens outside the system. It happens through a combination of document review, map interpretation, internal consultation, and individual judgment. The system captures the outcome of that work, but it does not assist in producing it.
This distinction is important because it defines where effort is concentrated.
When systems are limited to tracking, every project requires the same reconstruction of context. Staff must determine what constraints apply, how those constraints interact, and what needs to be addressed, often by navigating multiple disconnected sources of information. Even when data exists digitally, it is not structured in a way that allows the system to use it. It must be interpreted and applied manually.
This creates two limitations.
First, it limits consistency. Because evaluation depends on individual interpretation, outcomes can vary based on experience, familiarity with local conditions, and the ability to connect information across sources. The system does not enforce a consistent starting point; it records decisions after they have already been made.
Second, it limits scalability. The system does not get more effective as more projects move through it, because it is not learning or applying logic. Each new project requires the same level of manual effort to determine what applies. The work does not compound or improve over time.
A modern system would change this by taking on a more active role in the evaluation process.
Tracking tells you where an application is. Performing work helps determine what should happen next.
This does not mean replacing human judgment or automating decisions that require interpretation. It means structuring information in a way that allows the system to assist in the foundational work that leads to those decisions.
Instead of requiring staff to assemble context from multiple sources, the system can bring forward relevant constraints based on how a project interacts with the landscape. Instead of requiring manual interpretation to determine what applies, the system can identify applicable conditions and surface them in a consistent way. Instead of waiting for each discipline to independently identify issues, the system can begin to highlight potential conflicts as soon as a project intersects known constraints.
This shifts the role of the system from passive record-keeping to active support.
It also changes how time is spent. Less time is required to determine what applies and what needs to be evaluated. More time can be spent on assessing tradeoffs, resolving conflicts, and making decisions that require judgment.
Over time, as more of this logic is structured and embedded, the system becomes more effective with use. It does not simply store information about past projects. It applies what has been learned in a consistent way across new ones.
That is the difference between a system that tracks work and one that participates in it.
This shift becomes tangible when looking at how geospatial data is currently used.
Geospatial layers are one of the primary ways that information about land and resources is represented. They define where constraints exist, but they do not define how those constraints operate. A layer may indicate the presence of a habitat boundary, a land use designation, an existing right-of-way, or a withdrawal area, but it does not carry the logic needed to determine what that means for a specific project.
As a result, the layer serves as a reference point rather than a participant in the evaluation process.
Most geospatial systems are designed to display information, not to apply it.
When a project intersects a layer today, the system shows that intersection. It may expose attributes tied to that feature, but it does not translate those attributes into action. The reviewer still has to determine what the intersection means, what requirements it triggers, and how it interacts with other constraints. That determination typically requires moving outside the system—into planning documents, guidance manuals, prior decisions, or institutional knowledge.
The work happens after the intersection is identified, not within it.
This creates a gap between information and application. The system can tell you that something exists, but it cannot tell you what to do about it.
That gap is where much of the manual effort in permitting lives.
A different approach would treat these layers not as static overlays, but as objects that can participate in the evaluation process.
Instead of simply indicating that a constraint exists, the object would carry information about how that constraint behaves when a project intersects it. It would not just describe the feature—it would encode how that feature is applied.
At a basic level, this means the object can translate intersection into implication. If a project overlaps a given constraint, the system can identify what that overlap triggers. It can surface conditions, identify required actions, and begin to outline what needs to be addressed.
At a more advanced level, it allows those objects to interact with one another. Constraints rarely operate in isolation. A land use designation may interact with habitat protections. A right-of-way may intersect with resource constraints. Today, those interactions are discovered over time as different disciplines evaluate the project independently.
When constraints are structured as objects with logic, those interactions can begin to be evaluated together. The system can identify where conditions overlap, where requirements conflict, and where additional analysis is likely to be required.
This moves the system from displaying information to organizing and applying it.
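The difference between a layer and an object that carries logic can be sketched as follows. All class, field, and condition names here are hypothetical, invented for illustration: the point is that each constraint encodes what an intersection triggers and which other constraints it interacts with, so the system can translate intersection into implication.

```python
# Hypothetical sketch: constraints as objects that carry logic, rather than
# static overlays that only report an intersection.

from dataclasses import dataclass, field

@dataclass
class Constraint:
    name: str
    # Conditions triggered whenever a project intersects this constraint.
    triggered_conditions: list = field(default_factory=list)
    # Other constraints whose co-occurrence calls for combined analysis.
    interacts_with: set = field(default_factory=set)

    def on_intersect(self):
        """Translate 'project touches this feature' into implications."""
        return list(self.triggered_conditions)

habitat = Constraint(
    "habitat_boundary",
    triggered_conditions=["biological survey required", "seasonal timing limits"],
    interacts_with={"right_of_way"},
)
row = Constraint(
    "right_of_way",
    triggered_conditions=["existing-rights check required"],
    interacts_with={"habitat_boundary"},
)

def evaluate(intersected):
    """Collect every triggered condition and flag constraint interactions."""
    conditions = [c for obj in intersected for c in obj.on_intersect()]
    names = {obj.name for obj in intersected}
    interactions = sorted(
        {tuple(sorted((obj.name, other)))
         for obj in intersected for other in obj.interacts_with & names}
    )
    return conditions, interactions

conditions, interactions = evaluate([habitat, row])
print(conditions)    # the system outlines what needs to be addressed
print(interactions)  # and where combined analysis is likely required
```

In a display-only system, the reviewer leaves the tool to work out what an intersection means; here the object itself supplies the triggered conditions, and co-occurring constraints are flagged for combined analysis rather than discovered sequentially.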
This does not eliminate the need for interpretation, but it changes where interpretation is required.
Instead of asking each reviewer to determine from scratch what a constraint means and how it applies, the system provides a structured starting point. It ensures that the same baseline logic is applied consistently across projects, while still allowing staff to exercise judgment where needed.
It also reduces dependence on fragmented sources of information. Today, the logic that governs how constraints are applied is often distributed across multiple documents, policies, and informal practices. Structuring that logic within the system brings those pieces together in a way that is accessible and repeatable.
There is also a timing component to this shift.
When layers function only as context, their implications are often evaluated later in the process, after a project has already been shaped. When they function as objects that carry logic, those implications can be surfaced earlier, at the point where decisions are still flexible.
This aligns directly with the broader shift from sequential to parallel evaluation. Instead of waiting for each discipline to interpret its own set of constraints, the system can begin to expose how those constraints interact as soon as a project is defined.
Over time, as more constraints are structured in this way, the system becomes more capable of performing the groundwork of evaluation.
It can identify what applies.
It can highlight where issues are likely to emerge.
It can organize the set of conditions that need to be addressed.
This does not turn the system into a decision-maker. It turns it into a consistent, structured participant in the evaluation process.
That distinction matters. A system that only displays information requires every project to be interpreted from scratch. A system that can apply information allows each project to start from a shared understanding of how constraints operate.
That is the difference between layers as context and objects that perform work.
For agencies, this changes both how projects are reviewed and how staff time is allocated.
Under the current system, a significant portion of effort is spent assembling information and determining how it applies. This requires navigating multiple sources, interpreting incomplete data, and reconciling inputs from different disciplines over time. Even when data exists in digital form, it is often fragmented across systems, formats, and documents that were never designed to work together.
As a result, the early stages of review are less about evaluating a project and more about constructing a working understanding of it. Staff must determine what constraints apply, how those constraints interact, and what needs to be addressed before meaningful evaluation can even begin.
That work is necessary, but it is also repetitive.
Each new project requires the same process of reconstruction, often with only incremental differences. The system does not retain or apply that understanding in a way that meaningfully reduces effort on subsequent projects. It records outcomes, but it does not operationalize the logic behind them.
By structuring that information within the system, much of this upfront work can be reduced.
Instead of starting from a blank slate, staff begin with a clearer, more complete picture of how constraints interact and what issues are likely to require attention. The system can surface relevant conditions, identify potential conflicts, and organize the set of considerations that need to be evaluated.
This changes the entry point of review.
Rather than asking “what applies here,” the question becomes “how should we evaluate what already applies.” That shift reduces time spent gathering and interpreting information and allows staff to engage more directly with the substance of the project.
It also changes how work is distributed across the lifecycle of a project.
Under the current structure, a large portion of effort is concentrated in later stages, where new information forces revision of prior work. By moving more of that discovery earlier, the overall workload becomes more balanced. Issues are identified when they are easier to address, and the need to revisit earlier decisions is reduced.
This does not eliminate complexity, but it changes how that complexity is managed.
Consistency is another area where the impact becomes clear.
When evaluation depends on individual interpretation, similar projects can be approached differently depending on who is reviewing them and how they interpret the available information. This variability is not necessarily incorrect, but it creates uncertainty for both agencies and applicants.
By structuring how constraints are identified and applied, the system can provide a consistent baseline across projects. Staff still exercise judgment, but they do so from a shared starting point rather than reconstructing that starting point independently each time.
This does not reduce the importance of expertise. It changes where that expertise is applied.
Instead of spending time assembling and interpreting information, staff can focus on evaluating tradeoffs, resolving conflicts, and making decisions that require context and judgment. The role shifts from information processing to decision-making.
Over time, this also improves the system’s ability to scale.
As more logic is structured and applied consistently, the system becomes more effective with use. It does not simply store information about past projects. It applies that information in a way that reduces the effort required to evaluate new ones.
The result is not just a faster process. It is a more predictable and more consistent one, where effort is directed toward the parts of the work that actually require expertise.
For developers, the primary change is earlier visibility into risk.
Under the current system, much of the information that determines project feasibility already exists, but it is not structured in a way that allows it to be easily understood. Data is fragmented across agencies, formats, and documents, and even when it is accessible, it does not reflect how constraints interact. That interaction is often what determines whether a project is viable.
As a result, meaningful risks are frequently discovered only after a project has entered the permitting process. By that point, site selection decisions have been made, preliminary design work has been completed, and time and capital have already been committed. When fundamental conflicts emerge at that stage, they do not just slow the project down—they can force it to be reworked or abandoned entirely.
This creates a mismatch between when decisions are made and when risk is actually understood.
By structuring and exposing constraints earlier, a different decision process becomes possible.
Developers can evaluate potential sites with a more complete understanding of what challenges are likely to arise, before committing resources to engineering, design, and formal application. Instead of relying on incomplete information and discovering issues later, they can assess feasibility in a more informed and deliberate way at the outset.
This does not eliminate uncertainty, but it changes its timing.
Risk is not removed, but it is surfaced earlier, when it is easier to respond to. A site that presents fundamental conflicts can be avoided before significant investment is made. A site that is viable but constrained can be approached with those constraints already understood and incorporated into planning.
This shift has a direct impact on how capital is deployed.
Under the current model, capital is often committed in stages that assume a level of feasibility that has not yet been fully tested. When constraints emerge later, that capital is exposed to rework, delay, or loss. By moving feasibility forward in the timeline, developers can allocate resources with a clearer understanding of risk.
This leads to better alignment between project selection and project viability.
It also changes the nature of due diligence.
Today, due diligence is often fragmented, with different aspects of feasibility evaluated at different points in time. Land status, environmental constraints, infrastructure considerations, and regulatory requirements are not always assessed together. This makes it difficult to form a complete picture of the project early on.
A system that integrates these constraints allows due diligence to be more cohesive. Instead of evaluating factors in isolation, developers can understand how they interact and where conflicts are likely to arise.
The question shifts from “can this project be permitted” to “should this project be pursued.”
This is a fundamental shift.
It moves feasibility from something that is confirmed late in the process to something that is assessed early. It allows developers to make decisions with a clearer understanding of both opportunity and constraint, reducing the likelihood of advancing projects that are not viable.
The result is not just fewer delays. It is a more disciplined approach to project selection and a clearer alignment between effort, capital, and outcome.
As more of the underlying logic is structured within the system, its role begins to shift from coordination to evaluation.
Today, adjudication is not a single step. It is the cumulative outcome of multiple disciplines evaluating a project over time, each contributing a piece of understanding based on when and how they encounter the project. The system itself does not perform adjudication. It organizes the sequence through which adjudication happens.
That distinction matters.
Most of what is considered “adjudication” today is not decision-making in the strict sense. It is the work that leads up to a decision—identifying what constraints apply, determining what needs to be addressed, and assembling the information required for a decision to be made. This work is procedural, but it is performed manually because the system does not carry the logic needed to do it.
Each project effectively requires building that foundation from scratch.
When that logic is structured and embedded, the system can begin to take on part of that responsibility.
It can identify which constraints are relevant based on how a project intersects the landscape. It can organize those constraints into a coherent set of considerations. It can outline what needs to be addressed before a decision can be made, rather than waiting for that to be assembled over time through sequential review.
This does not eliminate the need for discipline-specific input. It changes how that input is coordinated.
Instead of each discipline independently determining what applies and introducing that information at different stages, the system can bring forward a more complete picture at the outset. Disciplines are no longer building understanding in isolation. They are evaluating within a shared context that has already been partially structured.
This is where the concept of adjudication begins to change.
A system that can consistently identify what applies, organize what needs to be addressed, and surface where conflicts exist is no longer just tracking a process. It is participating in the process of determining outcomes.
That does not mean the system is making decisions.
It means the system is doing the work that allows decisions to be made more effectively.
There is also a consistency component to this shift.
Today, the way constraints are identified and applied can vary depending on who is reviewing the project, what information they prioritize, and how they interpret it. This variability is often necessary, but it also introduces uncertainty.
When the underlying logic is structured within the system, there is a consistent baseline for how constraints are identified and applied. Staff still exercise judgment, but they do so from a shared starting point. This reduces variability in the early stages of evaluation and makes the overall process more predictable.
Over time, as more constraints and regulatory logic are encoded, the system becomes more capable of handling the procedural aspects of adjudication.
It can determine what applies to a project, bring those constraints together, and structure how they need to be evaluated before a decision is made.
This reduces the amount of manual effort required to move a project from submission to decision, not by removing steps, but by structuring them in a way that can be applied consistently.
The role of staff does not diminish in this model. It becomes more focused.
Instead of spending time assembling information and determining what applies, staff can focus on evaluating tradeoffs, resolving conflicts, and making decisions that require context, judgment, and accountability.
The system handles the groundwork. The agency retains the authority.
This is the distinction.
A system that tracks workflow supports a process.
A system that structures and applies logic begins to support adjudication.
That is what a modern permitting system ultimately needs to become.
The permitting system is not slow because it is being poorly executed. It is slow because of how it is structured and when it introduces critical information.
Efforts to improve permitting that focus on making the existing process more efficient do not address this issue. They operate within a structure that is designed to surface information late and require iteration to resolve it. As long as that structure remains in place, the same patterns will continue to emerge, regardless of how much individual steps are optimized.
This is why so many reform efforts produce limited results. They target symptoms rather than causes. They reduce friction within individual steps, but they do not change how information flows through the system or when key constraints are identified.
A structural problem cannot be solved by optimizing individual steps within it.
The current system produces delay because it defers understanding. It introduces critical information after a project has already been shaped, forcing adjustment rather than informing direction. That pattern is not incidental. It is embedded in how the process is organized.
Trying to make that process faster does not change what it produces. It only compresses the timeline in which the same conflicts, revisions, and uncertainties occur.
A different outcome requires a different approach.
It requires moving evaluation earlier, structuring how constraints are applied, and allowing the system itself to participate in the work of determining what applies before decisions are locked in. It requires shifting from a model where risk is discovered over time to one where it is surfaced as early as possible.
This is not a matter of improving coordination or increasing efficiency. It is a matter of redefining how the process works.
Until that shift happens, the conversation around permitting will continue to circle the same ideas—faster reviews, better coordination, more resources—without addressing the underlying cause of delay.
The question is not how to make the current system faster.
It is what a permitting system would look like if it were designed today.