Use Choice Modeling to Enhance Public Procurement Before AI Drops a Grenade
When marketing techniques meet public sector decision-making, better outcomes emerge for both businesses and taxpayers.
By Daniel J. Finkenstadt, PhD with ChatGPT Deep Research and Claude 3.7 Sonnet
Problem Statement
Government procurement faces a critical challenge: decision-makers cannot reliably articulate how they weigh competing priorities when selecting contractors. Research reveals a persistent gap between what procurement officers say they value and how they actually make decisions, leading to inefficient acquisitions, requirement misalignment, and costly bid protests. When Thompson (2022) studied 30 Air Force contracting officers, she found that not a single one could accurately predict their own decision-making behavior. This disconnect has far-reaching implications. First, when source selection criteria don't reflect real valuation, acquisitions fail to maximize value for taxpayers. Second, the inconsistency between stated and actual preferences creates vulnerability to protests, which delay critical procurements and cost millions. Third, requirements developers lack quantitative methods to prioritize features that truly deliver mission value, resulting in over-specified, costly products that underdeliver on critical capabilities.
The AI Grenade
This problem is poised to become significantly more acute as organizations rush to implement AI-driven or AI-augmented decision systems in procurement. Without first understanding how humans actually make trade-off decisions—not just how they claim to make them—AI systems will simply institutionalize and amplify existing biases and inconsistencies. The naive assumption that AI can extract decision rules from historical data ignores the fundamental issue that the data itself reflects decisions made through unstated, unacknowledged preference structures. Algorithmic approaches to source selection that rely on a priori rank ordering of criteria without empirically validated weights will not only perpetuate these problems but make them more opaque and resistant to correction. When decision-makers cannot articulate their own value structures accurately, any AI system trained on their stated preferences rather than observed choices will inevitably diverge from true organizational values.
Traditional approaches relying on subjective expert judgment or simplistic scoring systems have proven inadequate for complex, multi-attribute decisions where intangible factors like quality and risk must be weighed against concrete metrics like cost and schedule. This paper addresses how choice-based conjoint analysis—a method proven in commercial product development—can bring empirical rigor to government procurement decision-making, revealing true preferences and enabling data-driven prioritization. By establishing ground truth on human value structures first, organizations can then develop AI systems that genuinely reflect their actual priorities rather than propagating the illusion of rationality over deeply flawed preference architectures.
The Power of Observed Preferences
Choice modeling, particularly choice-based conjoint (CBC) analysis, has emerged as a powerful methodology for understanding how people value the various components of a decision. Rather than directly asking consumers what features matter most to them, CBC infers preferences from observed choice behavior – a "back-door" approach that taps into the subconscious trade-offs people make when evaluating alternatives.
A striking example comes from a 2022 Naval Postgraduate School study where 30 Air Force contracting officers participated in a CBC-based mock source selection. The study first elicited each officer's stated ranking of evaluation factors, then compared it to the implied weights from their choices in conjoint scenarios. The results were revealing: "None of the subjects in this study could accurately order attribute importance in stated form to match their actual choices in simulated source selections" (Thompson, 2022).
Tukiainen et al. (2024) used these methods to ask "what are the priorities of bureaucrats?" They found that what procurement officers said was important differed dramatically from what their choices revealed they actually valued.
This disconnect highlights a powerful insight: understanding how people make choices—rather than what they say drives their decisions—is fundamental to both product development and public procurement. By applying the same analytical framework to both domains, organizations can dramatically improve decision quality and resource allocation.
The Evolution of Choice Modeling
Choice modeling, particularly choice-based conjoint analysis, originated in 1960s mathematical psychology and econometrics. Luce and Tukey introduced "conjoint measurement" in 1964, while economists like McFadden developed discrete choice theory in the 1970s. Marketing researchers, notably Paul Green and V. Srinivasan, began applying these techniques to consumer product problems in the early 1970s, recognizing their potential to predict buyer behavior and quantify the value of product features (Green & Rao, 1971; Green & Srinivasan, 1978).
The term "conjoint" refers to how respondents evaluate combinations of features considered jointly rather than in isolation. While early studies in the 1970s used full-profile cards and rankings to measure preferences, methodological advances led to more realistic choice-based approaches. By 1985, firms like IBM and Sawtooth Software had introduced dedicated software, and by the 1990s, these techniques were widely adopted across industries (Axess Research, 2021).
Modern CBC, a form of discrete choice experiment, was formalized by research from Louviere and Woodworth in the 1980s, enabling respondents to simply choose their preferred option from sets of alternatives—mirroring real purchase decisions—instead of rating or ranking each attribute level (Louviere & Woodworth, 1983). Combined with improvements in statistical techniques and computing power, these advances have made conjoint analysis a cornerstone of market research and beyond.
The Commercial Edge: How CBC Drives Product Innovation
In commercial contexts, CBC analysis has proven itself as a powerful tool for product development and innovation. By presenting consumers with realistic choice tasks (selecting among product profiles with varying features and prices), firms can quantify buyer trade-offs and determine the value of each attribute. Conjoint analysis can reveal how much value customers place on an added feature (like longer battery life or extra safety features) in monetary terms or relative preference.
The technique outputs utility scores (part-worths) for attribute levels, which indicate the contribution of each feature to the overall appeal of a product. These insights help designers and managers answer key questions: Which combination of features maximizes customer appeal? Which attributes are must-haves and which can be traded off? What price premium can a new feature command?
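To make the mechanics concrete, here is a minimal sketch of how relative attribute importance is conventionally derived from part-worth utilities. The part-worth values below are hypothetical, invented purely for illustration; in a real study they would be estimated from respondents' choices.

```python
# Hypothetical part-worth utilities for a consumer product study.
# (Values are illustrative only, not from any real dataset.)
part_worths = {
    "price":        {"$100": 0.8, "$150": 0.1, "$200": -0.9},
    "battery_life": {"8h": -0.5, "12h": 0.1, "16h": 0.4},
    "warranty":     {"1yr": -0.2, "2yr": 0.2},
}

def relative_importance(part_worths):
    """Importance of an attribute = its utility range (best level minus
    worst level) divided by the sum of ranges across all attributes,
    a standard conjoint summary statistic."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

importances = relative_importance(part_worths)
for attr, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {imp:.0%}")
```

With these illustrative numbers, price spans the widest utility range and therefore carries the largest share of importance, which is exactly the kind of quantified answer to "which attributes matter most?" that the text describes.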
CBC provides several critical benefits for product design and customer-centric innovation:
Identifying optimal feature bundles: CBC pinpoints the "ideal bundle" of features that yields the highest consumer preference, as well as the pricing sweet spot that balances attractiveness and profitability (Corridor Business, 2021). This guides R&D toward configurations that resonate with target customers.
Quantifying trade-offs: By forcing respondents to make choices, CBC reveals how consumers prioritize features. A conjoint study might show, for instance, that customers would give up a minor feature to save cost, but not sacrifice core performance for a discount (Conjointly, 2021). This information helps optimize feature sets for value.
Market simulation and forecasting: With estimated part-worths, companies can simulate market scenarios—predicting market share if Product A versus Product B is introduced, or testing different price points (Corridor Business, 2021; Conjointly, 2021). This allows evidence-based forecasting of how product changes or new concepts could perform in a competitive market.
Customer segmentation: By analyzing preference patterns, conjoint data identifies segments of customers with similar value structures. For example, one segment might be very price-sensitive while another values premium features, informing targeted marketing or tiered product offerings (Sawtooth Software, 2021).
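The market-simulation benefit above can be sketched in a few lines. This uses the standard multinomial logit share-of-preference rule; the product profiles and part-worths are hypothetical, for illustration only.

```python
import math

# Hypothetical part-worth utilities (illustrative only).
part_worths = {
    "price":   {"$100": 0.8, "$150": 0.1, "$200": -0.9},
    "battery": {"8h": -0.5, "16h": 0.4},
}

def total_utility(profile):
    """Sum the part-worths of each attribute level in a product profile."""
    return sum(part_worths[attr][level] for attr, level in profile.items())

def logit_shares(profiles):
    """Predicted choice share for each product: exp(U_i) / sum_j exp(U_j)."""
    utils = {name: total_utility(p) for name, p in profiles.items()}
    denom = sum(math.exp(u) for u in utils.values())
    return {name: math.exp(u) / denom for name, u in utils.items()}

shares = logit_shares({
    "Product A": {"price": "$150", "battery": "16h"},  # mid price, long battery
    "Product B": {"price": "$100", "battery": "8h"},   # cheap, short battery
})
for name, s in shares.items():
    print(f"{name}: {s:.1%}")
```

Swapping in a different price level or feature and re-running the simulation is how analysts test "what if" scenarios before committing to a design.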
What makes CBC particularly valuable is that it avoids asking consumers direct questions like "How important is X to you?"—an approach prone to error. Studies show that people struggle to accurately self-report the weight they place on individual factors (Conjointly, 2021). As one practitioner notes, "People are not accurate in assessing how much any one factor influences their purchase decision, but by having many participants evaluate multiple groups of bundles, we can determine the importance and value of individual components" (Conjointly, 2021). CBC observes choices, providing a more reliable picture of preferences by mimicking real decision processes.
Flipping the Script: CBC in Government Procurement
Given its success in illuminating customer preferences, it's natural to ask whether CBC's techniques can be applied to government procurement. Public sector acquisition, especially for complex purchases, is fundamentally a multi-criteria decision problem: officials must evaluate proposals based on multiple attributes (cost, technical capability, past performance, etc.) and decide which offer provides the best value.
The core insight is that the same conjoint principles used to design products around customer preferences can be used to design procurement choices around stakeholder values. Rather than assuming or heuristically assigning importance to requirements, officials can simulate decision scenarios and let their choices inform the weighting of evaluation criteria.
Requirements Development: Finding the Value Sweet Spot
Developing requirements for a government procurement often involves balancing performance, cost, schedule, and risk considerations. Here, conjoint analysis serves as a requirement trade-space exploration tool. For example, requirements writers could present stakeholders (end-users, program managers, etc.) with several hypothetical solutions that meet the basic need but with varying levels of capability and cost.
By asking stakeholders to choose which concept they prefer, analysts can infer the relative value placed on different requirement levels. This quantitatively answers questions like: Is an extra 10% performance worth a 15% higher cost? Would users prefer a slightly lower capability if it means faster delivery?
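A question like "is an extra 10% performance worth a 15% higher cost?" reduces to comparing a willingness-to-pay estimate against the cost increase. The sketch below shows the arithmetic; the utility coefficients and the $1,500 base price are hypothetical assumptions, not values from any cited study.

```python
# Hypothetical conjoint estimates (illustrative only):
util_per_dollar = -0.002   # estimated disutility of each additional dollar
gain_high_perf = 0.6       # part-worth gain for the higher performance level

# Willingness-to-pay = utility gain divided by the utility cost of a dollar.
wtp = gain_high_perf / -util_per_dollar

# A 15% increase on an assumed $1,500 base price.
price_increase = 0.15 * 1500.0

print(f"WTP for the performance step: ${wtp:.0f}")
print("Worth it" if wtp >= price_increase else "Not worth it")
```

Here the stakeholders' implied willingness-to-pay ($300) exceeds the $225 cost increase, so the data would support buying the higher-performance option.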
Using CBC in this early stage helps ensure that the requirements are customer-focused – where the "customer" is the mission end-user or the agency's mission objectives. It brings a value-based design approach to government specifications, akin to how commercial product teams use conjoint to design for consumer value.
This method also uncovers hidden priorities. Often, different stakeholders have different implicit preferences (e.g., operators might value capability highly while budget officers emphasize cost). A conjoint exercise can reveal these differences and facilitate a more informed discussion to reach a balanced requirement set.
Source Selection: Revealing True Decision Weights
Perhaps the most direct application of CBC in public procurement is in source selection – how proposals are evaluated and a winner is chosen. In U.S. federal procurement, agencies must announce evaluation factors and their relative importance in the solicitation, and then adhere to those in making the award. Research suggests, however, that there is often a disconnect between the stated importance of evaluation criteria and the actual decision-making behavior of source selection teams (Finkenstadt, 2020; Thompson, 2022).
CBC offers a way to empirically derive the weights for source selection criteria by observing choices in a controlled setting. One approach is to create a simulated source selection exercise: present acquisition professionals with hypothetical proposal scenarios where each proposal has different attribute levels (e.g., different bid price, different past performance rating, different delivery time), and ask which proposal they would award the contract to. By analyzing these choices, one can calculate the part-worth utilities for each evaluation factor level and the overall relative importance of each factor in driving the award decision.
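A simple first pass over such simulated source-selection data is a "counts" analysis: how often was each attribute level chosen relative to how often it was shown? The choice tasks below are hypothetical toy data; a full study would estimate part-worths with a choice model rather than raw counts.

```python
from collections import defaultdict

# Hypothetical simulated source-selection tasks (illustrative only).
# Each task lists the proposal profiles shown; `chosen` indexes the winner.
tasks = [
    {"profiles": [{"price": "low", "past_perf": "medium"},
                  {"price": "high", "past_perf": "high"}], "chosen": 1},
    {"profiles": [{"price": "low", "past_perf": "low"},
                  {"price": "high", "past_perf": "high"}], "chosen": 1},
    {"profiles": [{"price": "low", "past_perf": "high"},
                  {"price": "high", "past_perf": "medium"}], "chosen": 0},
]

shown = defaultdict(int)
picked = defaultdict(int)
for task in tasks:
    for i, profile in enumerate(task["profiles"]):
        for attr, level in profile.items():
            shown[(attr, level)] += 1
            if i == task["chosen"]:
                picked[(attr, level)] += 1

# Choice frequency of each level = times chosen / times shown.
for key in sorted(shown):
    print(key, f"{picked[key] / shown[key]:.2f}")
```

In this toy data the "high" past-performance level is chosen every time it appears while the "low" price level is not, hinting that past performance, not price, is driving awards, exactly the kind of revealed weighting the text describes.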
Thompson's 2022 Naval Postgraduate School study demonstrated this disconnect clearly. In every case, the trade-offs the contracting officers made when choosing a winner differed from their initial stated priority order. This stark finding confirms the value of CBC – it reveals true choice behavior that might otherwise be masked by official doctrine or cognitive biases.

A key contribution to this field comes from Finkenstadt and Hawkins (2016), who first introduced the "Quality-Infused Price" (QIP) concept as an experimental framework to bridge the gap between price and quality in best-value procurements. They identified that current source selection methods often fail to quantitatively state what really matters to the government and how to best quantify it, laying the theoretical foundation for a more rigorous approach to evaluation.
However, it wasn't until Finkenstadt's 2020 doctoral dissertation at the University of North Carolina that the QIP methodology was fully developed. In this groundbreaking work, Finkenstadt psychometrically tested and validated the quality factors, then used CBC experiments with Sawtooth Software to systematically monetize these factors through willingness-to-pay estimates. This rigorous approach transformed QIP from a conceptual framework into an empirically validated methodology with practical applications.
The fully developed QIP methodology essentially provides a way to monetize trade-offs between service quality and price. Using conjoint analysis outputs, this approach calculates the willingness-to-pay for incremental improvements in various quality attributes of a service contract. This allows proposal prices to be adjusted by a quality factor, producing a single integrated metric for evaluation. For example, a slightly higher-priced offer with much better performance could be adjusted to an "effective price" that reflects its added value. By converting latent quality ratings into dollar equivalents derived from actual preference data, source selections can be made more rational and defensible.
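The effective-price adjustment can be sketched as follows. This is a simplified illustration in the spirit of QIP, not the published methodology itself; the dollar figures are hypothetical, whereas in practice they would come from conjoint-derived willingness-to-pay estimates.

```python
# Hypothetical willingness-to-pay for quality above a baseline level
# (illustrative only; real values come from conjoint WTP estimation).
wtp_above_baseline = {
    "past_performance": {"medium": 0.0, "high": 50_000.0},
    "technical":        {"acceptable": 0.0, "outstanding": 120_000.0},
}

def effective_price(bid_price, quality):
    """Subtract the monetized value of above-baseline quality from the bid
    price, yielding a single quality-adjusted metric for comparison."""
    credit = sum(wtp_above_baseline[attr][level]
                 for attr, level in quality.items())
    return bid_price - credit

offer_a = effective_price(1_000_000, {"past_performance": "medium",
                                      "technical": "acceptable"})
offer_b = effective_price(1_100_000, {"past_performance": "high",
                                      "technical": "outstanding"})
print(f"Offer A effective price: ${offer_a:,}")
print(f"Offer B effective price: ${offer_b:,}")
```

Offer B's $100,000 price premium is more than offset by $170,000 of monetized quality, so on an effective-price basis it wins, the kind of rational, documentable best-value trade-off the text describes.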
Real-world experimentation with these ideas is growing. A recent international study by Tukiainen et al. (2024) conducted conjoint experiments with over 900 procurement officials in Europe. The findings illustrate how CBC can illuminate decision priorities: officials were found to prioritize avoiding certain negative outcomes even more than achieving positives – for instance, avoiding unexpectedly high costs was more important than simply seeking the lowest initial price, and avoiding suppliers with poor past performance was "the most important feature" in their choices. Meanwhile, factors like litigation risk or minor local preferences mattered less. This kind of insight showcases CBC's potential to unravel the "black box" of bureaucratic decision-making in a systematic way.
The Benefits of CBC in Public Procurement
Adopting choice-based conjoint methods in government procurement could yield several compelling benefits:
Empirical weighting of criteria: CBC provides a data-driven basis for setting weights or relative importance of evaluation factors. This reduces reliance on arbitrary or solely experience-based weighting and ensures the evaluation scheme truly reflects what the agency values in an award decision. In turn, this alignment can make source selection decisions more transparent and defensible, directly addressing procurement objectives like value for money and fairness. When an award decision is challenged, the agency can demonstrate that its criteria weighting was grounded in systematic analysis of decision-maker preferences.
Enhanced decision consistency: By revealing any disconnect between stated priorities and actual choices, conjoint analysis helps procurement teams self-calibrate. If, for example, cost is found to dominate decisions despite officially being "equal" to technical merit, the agency can adjust either its internal decision approach or the stated criteria to avoid contradiction. This leads to more consistent evaluation practices, where the pre-solicitation weighting matches the actual trade-offs made during source selection.
Quantification of intangibles: Government buyers often struggle to quantify intangible factors like quality, innovation, or risk. CBC can assign utility values to qualitative attributes, effectively translating them into a common preference scale. In the best case, methods like the Quality-Infused Price allow conversion of those utilities into dollar terms. Even if not converted to dollars, knowing that (for instance) a "High" past performance rating is worth twice as much as a "Medium" in utility to evaluators is invaluable for decision modeling.
Structured stakeholder input: Incorporating conjoint exercises in early requirement setting or strategy development forces a structured consideration of trade-offs by stakeholders. It encourages dialogue about what the true priorities are before finalizing an RFP. This can surface differences in perspective (e.g., end-user vs. contracting officer priorities) and drive a consensus built on analytic results rather than hierarchy.
Innovation in acquisition methods: Embracing CBC is part of a broader push toward innovative, data-informed acquisition. It parallels efforts in government to use evidence and modeling to drive decisions. By piloting CBC approaches, agencies signal an openness to modern analytical tools. Moreover, it aligns with the concept of "best value" procurement by providing a rigorous way to evaluate what "best value" truly means in measurable terms.
Reduced protest risk: Perhaps most importantly from an execution standpoint, a source selection that is structured on empirically derived preferences is less likely to produce surprises that give losing bidders grounds to protest. If the criteria weights are set based on how decision-makers actually value them, the chosen winner is more likely to be justifiable under those published criteria. As one DoD study concluded, providing a CBC-based framework for source selection can mitigate weaknesses in how attribute importance is developed and ultimately "reduce the risks of protests" (Thompson, 2022).
Challenges in Implementation
Despite its promise, transferring CBC methods into government procurement is not without challenges:
Regulatory and policy fit: Procurement laws require transparency in how decisions are made. Any use of CBC must be incorporated before proposal evaluation (during planning), since once proposals are received, evaluators generally must stick to the published criteria and process. Thus, CBC is a decision-support tool for planning, not a black-box used in lieu of the documented evaluation process.
Complexity and expertise: Designing and analyzing a CBC study requires expertise in survey design and statistical modeling. Most procurement offices do not currently have this skill set. Additionally, constructing realistic choice scenarios (especially for complex services or R&D projects) can be challenging – it requires understanding the trade-space well enough to set up plausible proposal profiles.
Stakeholder acceptance: Even if CBC analysis indicates different weights or preferences than traditional thinking, convincing stakeholders to trust and adopt those findings can be difficult. Some officials might be skeptical of statistical modeling, or uncomfortable ceding their judgment to a survey of colleagues. Overcoming this requires change management and education on the method's validity.
Hypothetical bias and validity: As with any survey-based method, results depend on respondents taking the exercise seriously and imagining the scenarios as if they were real. Researchers have attempted to mitigate this by introducing an incentive-alignment or realism prompt. For example, Finkenstadt (2020) found that adding an expert-scrutiny condition encouraged procurement respondents to behave more realistically in conjoint tasks, without significantly skewing their priority rankings.
The Path Forward: From Product Design to Procurement Excellence
Choice-based conjoint analysis has revolutionized how companies develop products by centering decisions on measured customer preferences. Its ability to quantify trade-offs and predict decision outcomes has direct relevance to government procurement, where officials similarly juggle multiple factors to achieve best value. By adapting CBC to model the "choice" of a contract award or the selection of requirements, agencies can bring analytical rigor and customer-centric thinking into acquisition planning.
Academic research and pilot studies demonstrate that procurement professionals' preferences can be elicited and modeled much like consumer preferences, yielding insights into how criteria should be weighted and which outcomes matter most. The potential benefits – more coherent evaluations, data-backed justifications, and reduced misalignment between stated and actual priorities – align well with the objectives of transparency, value for money, and meeting mission requirements in public procurement.
Implementing CBC in the public sector must be done thoughtfully. Agencies should start with pilot programs on non-critical or repetitive procurement scenarios to build familiarity and validate the approach. Collaboration with researchers or use of established conjoint analysis tools can help bridge the skill gap. Results from conjoint studies should inform, not dictate, procurement strategy – providing a quantifiable reference point that decision-makers consider alongside policy mandates and expert judgment.
In summary, choice modeling offers a promising avenue to enhance customer-focused innovation in product design and mission-focused discipline in procurement. It underscores a common principle: whether in a competitive marketplace or a government acquisition, understanding what truly drives choices leads to better outcomes. By leveraging CBC to illuminate those drivers, commercial firms have improved their products – and now government buyers can improve how they define and deliver best value solutions for the public.
References
Axess Research. (2021). A short history of Conjoint Analysis. Retrieved from https://www.axessresearch.com/wp-content/uploads/2021/07/A-short-history-of-Conjoint-Analysis.pdf
Conjointly. (2021). What is Conjoint Analysis? (with examples). Retrieved from https://conjointly.com/guides/what-is-conjoint-analysis/
Corridor Business. (2021). From innovation to success with conjoint analysis. Retrieved from https://corridorbusiness.com/from-innovation-to-success-with-conjoint-analysis/
Finkenstadt, D. J. (2020). Perceived Service Quality and Value in B2G (Doctoral dissertation). University of North Carolina at Chapel Hill.
Finkenstadt, D. J., & Hawkins, T. G. (2016). #eVALUate: Monetizing service acquisition trade-offs using the Quality-Infused Price© methodology. Defense Acquisition Research Journal, 23(2), 202-228.
Green, P. E., & Rao, V. R. (1971). Conjoint measurement for quantifying judgmental data. Journal of Marketing Research, 8(3), 355-363.
Green, P. E., & Srinivasan, V. (1978). Conjoint analysis in consumer research: Issues and outlook. Journal of Consumer Research, 5(2), 103-123.
Louviere, J. J., & Woodworth, G. (1983). Design and analysis of simulated consumer choice or allocation experiments: An approach based on aggregate data. Journal of Marketing Research, 20(4), 350-367.
Orme, B. K. (2020). Getting Started with Conjoint Analysis (3rd ed.). Madison, WI: Research Publishers LLC.
Sawtooth Software. (2021). The CBC System for Choice-Based Conjoint Analysis Version 9. Retrieved from https://content.sawtoothsoftware.com/assets/0891a76f-93d3-4838-a38d-8ac0a2cda519
Thompson, B. (2022). Stated intentions vs. actual behavior: Choice-based conjoint in DoD source selections (MBA Professional Report, NPS-AM-23-002). Naval Postgraduate School. https://dair.nps.edu/handle/123456789/4779
Tukiainen, J., Blesse, S., Bohne, A., Giuffrida, L. M., Jääskeläinen, J., Luukinen, A., & Sieppi, A. (2024). What are the priorities of bureaucrats? Evidence from conjoint experiments with procurement officials. Journal of Economic Behavior & Organization, 227, 106716. https://doi.org/10.1016/j.jebo.2024.106716