
Saturday, September 6, 2014

Market research in R&D must be integrated with needs assessment and specification crafting

Procurement Reforms Reignite Feud Between Weapon Buyers and Testers
Pentagon procurement chief Frank Kendall is proposing changes in how weapon systems are tested. He suggests tests should be performed earlier in the design cycle than is customarily done.

The sooner the testing, he says, the sooner the Pentagon will catch problems before the military sinks huge amounts of money into a program. This would help avert expensive redesigns and modifications — a costly lesson the Pentagon learned over the past decade from the F-35 fighter program.

Kendall also believes that earlier "developmental" testing can help reduce the cost of "operational" testing — realistic live-fire drills that are mandated by law before any weapon system goes into production.

"We are trying to have more efficient test programs overall, get the data out before we make production decisions. That's critical to design stability," Kendall told National Defense after delivering a speech at a recent conference on acquisition reform. Program managers should have more data about how their systems perform before they begin operational tests, Kendall said. "We will continue to try to blend operational testing and developmental testing."

Kendall's deputy, Darlene Costello, in a speech to a test-and-evaluation industry conference last month, explained the rationale for planned changes related to weapon tests. There is now a "big emphasis on what we do before an RFP goes out. ... Testing is a big part of that," said Costello, who is director of program and acquisition management.

But the Pentagon's plan to wring out more "efficiency" from testing has stirred old animosities between the procurement shop and the office of the director of operational test and evaluation — which operates independently from the procurement office and reports directly to the secretary of defense. DOT&E, as the testing office is known, has been a thorn in the side of many big-ticket weapon programs. Kendall's comments are raising fears in the testing community that their budgets will be gutted.

For many years, program managers have sought to have more control over test reports before DOT&E releases them to Congress and the news media. Procurement officials would rather have test results reported directly to them and have greater say in what information is disclosed.

After operational testers gave the Navy’s littoral combat ship a scathing review in their fiscal year 2012 annual report to Congress, service officials were unprepared for the political damage the report would cause. Testers concluded the ship lacked firepower and was not survivable in high intensity conflict.

Speaking at the same conference, Director of Operational Test and Evaluation J. Michael Gilmore pushed back on the notion that testing costs should be treated as expendable overhead. "How are you going to compress testing in this era of constrained budgets? I think it's a mistake. It accepts the premise that testing is driving increased cost," he said. "The facts don't support that premise. We want to make sure we do testing as rigorously and as often as we can."

Infighting between program officers and testers is par for the course at the Defense Department. Kendall's predecessor Ashton Carter commissioned an independent team in 2011 to probe complaints that developmental and operational testing contribute to excessive cost and schedule slippages in programs.

At the root of the problems that have plagued major Pentagon programs is the way the military services define their requirements, said Gilmore. "Oftentimes requirements are defined in technical specifications. That's OK, but insufficient to ensure a system provides military utility." He cited the Navy's P-8 maritime surveillance aircraft as a case in point. In operational tests last year, the aircraft showed it could fly, but it was not able to perform key missions like wide-area antisubmarine surveillance. Gilmore blamed the flap on the Navy because it had not specified antisubmarine warfare as a "key performance parameter."

Poorly written requirements continue to haunt programs, he said. "In this wonderful town, common sense doesn't play a role." Gilmore said many of the key parameters for the F-35 joint strike fighter relate to aircraft performance and payload capacity. "If we were just going to test KPPs, we would not fly combat missions, we would not penetrate air defenses, we would just fly off the carrier and back. ... How meaningful are these requirements?"

The Army, he said, wasted billions of dollars on a future combat system and on digital radios that never materialized. Its leaders were guilty of "approving requirements that are not achievable." Some programs get to operational testing and still don't have concepts for how they will operate, he said. "If the testing community played a more prominent role in requirements — and that's a big if — perhaps we could have avoided these mistakes," Gilmore said.

Gilmore suggested major programs should have a firm "test and evaluation master plan" before an RFP is written. "I have never understood when I am told we cannot do a T&E master plan until we get a response back from contractors. How can you generate a meaningful RFP without a draft test plan? Just as importantly, how can you evaluate the responses industry provides? I don't get it. What I fear is that some of these RFP evaluations are check mark exercises. I hope that's not the case."
The handy DAU ACQuipedia site (I trust it even when my browser balks at it; up to you) describes market research in the federal acquisition arena, and it is probably a good best-practices guide.

As usual, this is only an excerpt, which I feel free to cut, paste, rearrange, and omit from; read the whole piece at the link.
To understand the subject of market research we must begin with its definition as a "continuous process for gathering data." That process takes shape from both a strategic and a tactical vantage point. Strategic market research is the overarching process of market "surveillance" that takes place continuously throughout the entire acquisition lifecycle. From the early stages of the Material Solution Analysis Phase through the final steps of the Operations and Support Phase, acquisition workforce members in all disciplines are engaged in varying degrees of market research to remain knowledgeable about market developments that may meet government requirements.

Market research is conducted by all members of the acquisition team, including contracting (business advisors), program managers, engineers, logisticians, legal staff, test and evaluation personnel, cost specialists, the customer, and so on. Though each may focus attention on specific aspects, their ultimate goal is to pull together the necessary information to be analyzed so an informed decision can be reached. And while market research is necessary for every acquisition, as stated earlier, the extent of research required depends on five variables: the complexity of the acquisition, the urgency of the need, the estimated dollar value, how readily information is available, and past experience with the product or service being acquired.
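Here is a rough sketch of how those five variables might be weighed when scoping a research effort. To be clear, this is my illustration, not DAU's or the FAR's; every name, scale, and threshold below is invented:

```python
# A hypothetical sketch of weighing the five variables that drive the
# extent of market research. All scales and thresholds are invented.
from dataclasses import dataclass

@dataclass
class AcquisitionProfile:
    complexity: int          # 1 (commodity) .. 5 (novel R&D)
    urgency: int             # 1 (routine) .. 5 (emergency)
    estimated_value: float   # estimated dollar value of the buy
    info_availability: int   # 1 (scarce data) .. 5 (well-documented market)
    past_experience: int     # 1 (first buy) .. 5 (bought many times)

def research_depth(p: AcquisitionProfile) -> str:
    """Return a rough depth-of-research tier for planning purposes."""
    # Complexity, dollar value, and scarce information push toward deeper
    # research; urgency and prior experience push toward less.
    score = (p.complexity
             + (3 if p.estimated_value > 1_000_000 else 1)
             + (6 - p.info_availability)
             + (6 - p.past_experience)
             - p.urgency)
    if score >= 10:
        return "full market survey: RFIs, industry days, site visits"
    if score >= 6:
        return "targeted research: sources-sought notice, vendor catalogs"
    return "minimal refresh: confirm prior research is still current"

# A complex, high-dollar, poorly documented first buy scores deep research.
print(research_depth(AcquisitionProfile(4, 2, 5_000_000, 2, 1)))
```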

In most occupations, they say "the job isn't done 'til the paperwork's complete!" Well, in contracting that axiom is especially true. There are a number of functions and tasks contracting professionals must perform to exercise prudent use of taxpayer dollars, and in many instances the trail of logic must be fully supported and documented in the official record to stand up to the possible scrutiny of public interest. Market research is one task for which the FAR both suggests and requires documentation.
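The FAR requires the documentation but does not prescribe a format for it, so here is a minimal, hypothetical sketch of the kind of record that would support that "trail of logic" in the official file; every field name is my own invention:

```python
# A hypothetical market research record supporting the official file.
# The FAR requires documenting results; the structure here is invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MarketResearchRecord:
    requirement: str               # the need being researched
    techniques_used: list[str]     # e.g., RFIs, database queries, interviews
    sources_identified: list[str]  # vendors or solutions found capable
    conclusion: str                # the decision the research supports
    researcher: str
    date_completed: date = field(default_factory=date.today)

record = MarketResearchRecord(
    requirement="wide-area maritime surveillance sensor",
    techniques_used=["sources-sought notice", "industry day"],
    sources_identified=["Vendor A", "Vendor B"],
    conclusion="two responsible sources exist; competition is feasible",
    researcher="contracting officer",
)
print(record.conclusion)
```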
I tend to think of R&D as practical market research: since the need is fuzzy to begin with, it requires a constant information loop to confirm what the need is at the time of acquisition, what the market has, what the market could provide if it doesn't have it yet, and what it's going to cost (is it "worth" it?).
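That loop can be sketched out. The stub below is purely illustrative (every function is a hypothetical placeholder), but it shows the shape of the cycle: refine the need, check what the market has, check what it could build, and ask whether the price is worth it:

```python
# A purely illustrative sketch of the "constant information loop" for
# R&D buying. Every function below is a hypothetical stub.

def refine_need(need: str) -> str:
    return need          # stub: needs assessment sharpens the fuzzy requirement

def market_has(need: str) -> bool:
    return False         # stub: does an existing product meet the need?

def market_can_build(need: str) -> bool:
    return True          # stub: could industry develop what is missing?

def estimated_cost(need: str) -> float:
    return 2.0e6         # stub: what would it cost?

def worth_it(cost: float, budget: float) -> bool:
    return cost <= budget

need, budget = "fuzzy R&D requirement", 5.0e6
for cycle in range(3):   # in practice, loop until the acquisition decision
    need = refine_need(need)
    if market_has(need) and worth_it(estimated_cost(need), budget):
        print("buy commercially")
        break
    if market_can_build(need) and worth_it(estimated_cost(need), budget):
        print("pursue development; keep researching the market")
        break
else:
    print("revisit the requirement or the budget")
```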

It is certainly not a check-the-box exercise, as Director Gilmore noted. And it should never be one, even when acquiring routinely used products, such as the portable radio equipment the FBI decided to purchase from Motorola on a sole-source basis.
