
Monday, July 11, 2011

Testing the evaluation of best value

Best value, value for money, best anything: like beauty, best is often in the eye of the beholder. Given that low price alone may not be the optimal selection criterion when purchasing decisions extend beyond the realm of commercially standard products, especially when trying to choose the best service on offer, some amount of "best" must enter into the equation.

But also given that the objective nature of the best choice is not always verifiable, owing to the subjective lens of the evaluator, ensuring procurement integrity demands a more critical review of the best value evaluation process. More critical than the entirely deferential review of days of old, anyway.

The following three recent GAO cases suggest that, while a deferential review is still somewhat reverential, some rigorous examination of such evaluation is being conducted. The first case exemplifies deference reverence, and the other two hint at a more critical standard of review.

It should be remembered that most articles mentioned in this blog include only excerpts, often re-arranged, to give a general nature and taste of the original, while trying to keep the content short. You really need to read the original for the full context -- and to make your own evaluation of it. Remember, this is the internet, not an encyclopedia. Not that you should take an encyclopedia at face value, either.

Matter of: L-3 STRATIS, B-404865, June 8, 2011
L-3 STRATIS, the protester, argues that the agency’s evaluation of its proposal was unreasonable. We deny the protest.

The RFP, which was issued on March 24, 2010, contemplated the award of a 6-year indefinite-delivery/indefinite-quantity contract to the offeror whose proposal represented the best value to the government, with technical factors considered significantly more important than price in determining best value. The technical factors (and their corresponding weights) were as follows: (1) technical approach-25 points; (2) technical qualifications of key personnel and technical staff-15 points; (3) corporate experience and past performance-20 points (10 points each for corporate experience and past performance); (4) program management-20 points; and (5) subcontracting-20 points.
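For illustration only (this is not the agency's actual scoring method, and the 0-to-1 ratings below are invented), a point-weighted scheme like the one in the RFP reduces to a simple weighted tally:

```python
# Illustrative sketch of a point-weighted technical scoring scheme.
# The weights mirror the RFP's five factors; the ratings are invented.

weights = {
    "technical_approach": 25,
    "key_personnel": 15,
    "experience_past_performance": 20,
    "program_management": 20,
    "subcontracting": 20,
}  # totals 100 points

ratings = {  # hypothetical evaluator ratings on a 0.0-1.0 scale
    "technical_approach": 0.8,
    "key_personnel": 0.9,
    "experience_past_performance": 0.7,
    "program_management": 0.85,
    "subcontracting": 0.6,
}

# Weighted sum: each factor contributes its points times its rating.
technical_score = sum(weights[f] * ratings[f] for f in weights)
print(f"Technical score: {technical_score:.1f} / 100")
```

Note that a score like this is only an input to the tradeoff; as the decisions below stress, the selection official must still look behind the numbers at the qualitative differences between proposals.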

After evaluating the final proposals, the source evaluation panel (SEP) furnished the source selection official (SSO) with a report summarizing the proposals’ strengths and weaknesses. After reviewing the SEP’s findings, the SSO determined that Perot’s proposal represented the best value to the government.

In his source selection determination, the SSO explained that while L-3’s proposal had demonstrated a number of strengths, one of which was the protester’s innovative plan [deleted] for delivering IT infrastructure services to agency users, the proposal’s strengths were offset by several concerns.

The SSO’s first concern was that the protester’s timeline for implementation of [deleted] was unclear, which called into question the value of the benefits associated with L-3’s [deleted] approach. The SSO went on to explain that he considered the lesser degree of uncertainty associated with Perot’s more clearly defined timeline to be worth a price premium.

[There were other issues, as well.]

The SSO concluded that the uncertainty regarding L-3’s [deleted] transition timeframe and its ability to implement ITIL v3, along with the vagueness concerning its use of subcontractors, was not mitigated by its lower price. The SSO further concluded that the advantages of Perot’s proposal in terms of a clear path forward, solid ITIL v3 grounding, and a defined small business subcontracting plan were worth a price premium of approximately 15 percent vis-à-vis L-3’s proposal.
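As a back-of-the-envelope illustration (with invented prices; the decision reports only the approximate 15 percent figure), a price premium of this kind is just the relative difference between the two offers:

```python
# Hypothetical prices chosen for illustration only.
l3_price = 100_000_000.0     # lower-priced offer
perot_price = 115_000_000.0  # higher-priced awardee

# Premium expressed relative to the lower-priced offer.
premium = (perot_price - l3_price) / l3_price
print(f"Price premium: {premium:.0%}")
```

The tradeoff question the SSO had to answer, and document, was whether the awardee's technical advantages were worth that relative difference.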

The protester argues that the agency’s evaluation relied on unstated factors, that the agency ignored information contained in its proposal, and that the agency failed to conduct meaningful discussions. The protester maintains that these errors resulted in an unsupported best value tradeoff determination.

In reviewing protests objecting to an agency’s technical evaluation, our role is limited to ensuring that the evaluation was reasonable and consistent with the terms of the solicitation. As explained below, based on our review of the record here, we think that the agency’s evaluation was reasonable.

To the extent the agency believed that L-3’s proposal failed to provide sufficient detail regarding its timeline for implementation, L-3 argues that the agency should have raised the matter during the various rounds of discussions it held with the offerors. Having failed to do so, L-3 argues the discussions conducted by the agency were inadequate.

Although discussions must address at least deficiencies and significant weaknesses identified in proposals, the scope and extent of discussions are largely a matter of the contracting officer’s judgment. In this regard, we review the adequacy of discussions to ensure that agencies point out weaknesses that, unless corrected, would prevent an offeror from having a reasonable chance for award.

An agency is not required to afford offerors all encompassing discussions, or to discuss every aspect of a proposal that receives less than the maximum score, and is not required to advise an offeror of a weakness that is not considered significant, even where the weakness subsequently becomes a determinative factor in choosing between two closely ranked proposals.

While the weakness at issue may have served as a discriminator for the purpose of the SSO’s award decision, as previously indicated, the mere fact that a weakness becomes a determinative factor in choosing between two closely ranked proposals, does not mean that the agency was required to raise the issue during discussions.

Agencies are not required to “spoon-feed” offerors during discussions, but rather need only lead offerors into the areas of their proposals that require amplification or revision. Having sufficiently led L-3 to the area of concern in the first round of discussions, NRC was not required to raise the matter again in any subsequent round of discussions, even where it continued to be considered a concern by the agency.


Matter of: One Largo Metro LLC; Metroview Development Holdings, LLC; King Farm Associates, LLC, B-404896, June 20, 2011
[This case was the subject of a Washington Post report, which summarized the dispute as follows: "One Largo Metro, Metroview Development Holdings and King Farm Associates each filed bid protests with the GAO after the agency, which oversees development for the executive branch, awarded the lease for 1 million square feet of office space to JBG this year. JBG has held a five-year, $108 million lease on the HHS office space in Rockville while the procurement process continued. Meanwhile, the Prince George’s developers have been trying to win the HHS bid.

"The GAO ruled that the GSA’s award to Fishers Lane/JBG Cos. should be reevaluated and that three other developers who were trying to entice the U.S. Department of Health and Human Services to move from Rockville to Prince George’s County should get another chance to win the nearly $450 million, 15-year lease agreement."]

DIGEST
1. Protest is sustained where the agency failed to consider both the variety and quantity of amenities offered under the access to amenities subfactor, as required by the solicitation.
2. Protest is sustained where the head of the contracting activity did not meaningfully consider the evaluated differences in the offerors’ proposals in her selection decision.

Offerors were informed that award would be made on a “best value” basis, considering price and three technical factors: location; building characteristics; and past performance and key personnel. Under the location factor, the access to existing Metrorail subfactor was stated to be significantly more important than the access to amenities subfactor and more important than any other subfactor. The location and building characteristics factors were stated to be of equal weight, and to be each significantly more important than the past performance and key personnel factor. Price was stated to be significantly less important than the combined weight of the technical factors.

With regard to access to amenities, offerors were informed that “[o]ffers will be evaluated for amenities within the building or otherwise available” within one mile of the building’s main entrance, and that evaluations would consider “the quantity and variety of the following amenities: fitness facilities, postal facilities . . . restaurants, day care center, fast food establishments, dry cleaners, [banks and ATMs], convenience shops, card/gift shops, hair salons, automotive service stations, and drug stores.” The SFO further advised that the best rating would be given to offers that provide the greatest variety and quantity of amenities existing [if not then existing] at the time of occupancy within the building or within 1,500 wlf of the building.

[There were many other such objectively verifiable detailed, salient requirements, a point I take as a good example of the kind of detail a proper best value solicitation will contain.]

Offers were evaluated by the agency’s technical evaluation teams (TET), which assigned adjectival ratings under each non-price evaluation factor supported by a narrative discussion that identified the offerors’ respective strengths and weaknesses. GSA assigned a separate technical evaluation team for each non-price evaluation factor. The evaluation reports were provided to the agency’s source selection evaluation board (SSEB), which also evaluated the offerors’ revised proposals. The SSEB assigned adjectival ratings under each subfactor and for the proposals overall, but did not, at this juncture, provide an adjectival rating for the three top-level evaluation factors.

The SSEB conducted a tradeoff analysis and recommended that the lease be awarded to King Farm as reflecting the best value to the agency. The SSEB found that the proposals of Fishers Lane, One Largo, and Metroview were essentially technically equal, and that the price of Fishers Lane was lower than the prices of One Largo and Metroview. The SSEB then compared the Fishers Lane higher-rated proposal to King Farm’s lower-priced proposal. The SSEB found that the higher technical rating of the Fishers Lane proposal primarily reflected that offeror’s proposal of a building that was closer to the nearest Metrorail station. The SSEB also found, however, that King Farm had mitigated that advantage by offering a free shuttle service. The SSEB concluded that although the Fishers Lane proposal had a higher rating, the two offerors’ proposals approached “technical equality,” and the perceived benefit in the Fishers Lane proposal did not merit the additional cost to the agency.

The SSEB’s January 2011 evaluation report and award recommendation were provided to the agency’s source selection authority (SSA). The SSA was concerned with the SSEB’s rationale for its ratings of the offers and directed the SSEB to reevaluate and review its source selection recommendation.

The SSA stated, among other things, that the SSEB’s report did not indicate that the evaluation board had recognized that the location and building characteristics factors were of equal weight and that price was significantly less important than the technical factors.

Because the SSEB found that all offers were technically equal, the board again recommended that the lease be awarded to King Farm on the basis of its low price.

The SSA reviewed the SSEB’s addendum evaluation report and agreed with the board’s subfactor ratings and its recommendation to make award to King Farm. The SSA, however, disagreed with the SSEB’s conclusion that the offers were technically equal. The SSA concluded that, although the proposals were technically very close, One Largo’s offer was technically superior to the proposals of Fishers Lane, Metroview, and King Farm. In performing a tradeoff analysis, the SSA compared One Largo’s superior offer with King Farm’s low-priced, highly successful-rated offer. At the conclusion of her review, the SSA agreed with the ultimate conclusions of the SSEB, and decided that King Farm’s offer represented the best value to the government.

The SSA’s selection decision was provided to GSA’s commissioner for the National Capital Region Public Buildings Service, who also serves as the Head of the Contracting Activity (HCA) for this region. The HCA reviewed the SFO, SSP, TET reports, SSEB reports, and the SSA’s decision, and disagreed with the SSA’s conclusion that King Farm’s proposal offered the best value to the government.

While the HCA relied on the SSEB’s earlier ratings, she did not accept the SSEB’s tradeoff analysis or its recommendation for award. The HCA ranked the offers, based on the percentage of superior ratings received. The HCA stated that the Fishers Lane offer presented the best value to the government and selected Fishers Lane for award.

The HCA’s tradeoff between the proposals of One Largo and Fishers Lane did not, however, identify One Largo’s technical advantages or explain why these advantages did not justify the higher price. Id. Similarly, the HCA found that Metroview’s offer was not the best value to the government, based on Metroview’s lower rating and higher price than One Largo, but without any further explanation of the underlying merits of the proposals.

The HCA also compared the Fishers Lane offer with King Farm’s lower-priced offer, observing that the King Farm and Fishers Lane offers received the “same or similar” adjectival scores for all technical subfactors except for access to an existing Metrorail station. In reaching a tradeoff decision, the HCA acknowledged that, for this procurement, price is significantly less important than the combined weight of the technical factors, but that the importance of price increases as offers approach technical equality.

As noted above, the Fishers Lane offer was rated highly successful under the access to existing Metrorail subfactor, and King Farm’s offer was rated marginal. The HCA stated that this was the distinguishing difference between the offers, and selected the Fishers Lane higher-priced offer as the best value to the government.

These protests followed. GSA has stayed award of the lease pending our resolution of these protests.

The protesters raise numerous objections to the evaluation of offers and the HCA’s selection decision. As explained below, we sustain the protesters’ challenges to GSA’s evaluation of offers under the access to amenities subfactor, and to the agency’s source selection decision. We deny the remainder of the specific challenges.

King Farm argues that GSA’s evaluation of proposals under the access to amenities subfactor was not in accordance with the SFO. Specifically, King Farm contends that offerors were advised that the agency would consider the quantity, variety, and proximity of amenities offered. Instead of considering the quantity and variety of amenities, King Farm argues that the agency only considered the number of amenity categories offered.

GSA acknowledges that the SFO provided for the evaluation of the variety, quantity, and proximity of amenities but argues that this was accomplished by assessing the number of amenity categories offered by each offeror. GSA also argues that even if their ratings under the access to amenities subfactor were improved, King Farm and Metroview were not prejudiced by GSA’s actions because this subfactor represented only [Deleted] percent of the total evaluation, and thus would not have altered the overall technical ratings or the results of the tradeoff analysis.

We find that GSA’s approach to evaluating this SFO provision was inconsistent with the terms of the provision. The SFO provided:
Offers will be evaluated for both the quantity and variety of the following amenities:.... The final evaluation will consider all of the available amenities and the offers will be scored based on the quantity, variety, hours and proximity of such amenities.... The best rating will be given to offers that provide the greatest variety and quantity of amenities....
The plain language of the SFO requires GSA to evaluate both the overall number of amenities offered as well as the number of amenity categories (i.e., the variety).

But GSA’s simple counting of categories, such as hair salons or automotive service stations, ignores the type of amenity being offered. GSA’s counting of amenity categories disregarded King Farm’s identification of three restaurants and three fast food establishments within 1,500 wlf of its building, as compared to identification by Fishers Lane of only one restaurant and four fast food establishments within 1,500 wlf. Similarly, GSA’s evaluation does not account for the fact that 7 of 18 amenities offered by Fishers Lane were automotive service stations. In short, we find that GSA’s assignment of adjectival ratings based only upon how many amenity categories were offered was not reasonable.

As set out below, all three of the protesters here raise challenges to the HCA’s selection decision. King Farm argues that the HCA failed to perform the required tradeoff analysis, and failed to articulate any rationale for paying the price premium for the Fishers Lane proposal.

One Largo, the highest-rated offeror, argues that the HCA’s recitation of offerors’ scores and prices--without additional explanation weighing the strengths and weaknesses of each proposal--was insufficient to support the HCA’s determination that the Fishers Lane proposal represented the best value to the government. One Largo complains that the HCA failed to credit One Largo for its evaluated technical superiority by looking behind its higher ratings to discern the substantive differences in the proposals. For example, the HCA failed to evaluate the true difference in distances from a Metrorail station; the Fishers Lane location was more than four times farther away.

Finally, Metroview argues that the HCA failed to meaningfully consider whether Metroview’s proposal, which received a higher percentage of superior ratings than the Fishers Lane proposal, merited the cost premium, based on each proposal’s strengths and weaknesses. Metroview argues that the HCA did not substantively discuss the technical strengths and weaknesses of the two proposals to determine whether they were technically equal or whether one was technically superior, but instead mechanically applied the adjectival ratings to determine technical superiority.

Moreover, Metroview asserts that the evaluation record does not provide clear support for any one proposal. In this regard, Metroview notes that the SSEB concluded in its final evaluation report that all proposals were technically equal and recommended King Farm based on its lower price; the SSA disagreed with the SSEB’s determination of technical equality and selected King Farm’s proposal after a tradeoff analysis; and the HCA disagreed with the conclusions of both the SSEB and the SSA to select the Fishers Lane proposal.

In reviewing an agency’s evaluation of proposals and source selection decision, we examine the supporting record to determine whether the decision was reasonable, consistent with the stated evaluation criteria, and adequately documented.

Although source selection officials may reasonably disagree with the ratings and recommendations of evaluators, they are nonetheless bound by the fundamental requirement that their independent judgments be reasonable, consistent with the stated evaluation scheme, and adequately documented.

In this regard, ratings, whether numerical, color, or adjectival, are merely guides for intelligent decision making.

An agency’s source selection decision cannot be based on a mechanical comparison of the offerors’ technical scores or ratings per se, but must rest upon a qualitative assessment of the underlying technical differences among competing offers.

GSA argues that the HCA reasonably exercised her discretion in determining that the Fishers Lane proposal represented the best value to the government. GSA further argues that the HCA’s review of the SSEB report and the SSA decision, which each contained a detailed discussion of the merits of each proposal, provided sufficient basis for the HCA’s selection decision. In particular, GSA notes that the HCA adopted the overall technical ratings assigned by the SSEB in its January 2011 report. GSA also contends that our prior decisions do not require agency selection officials to discuss every detail regarding the relative merit of the proposals in the selection decision document.

We recognize that while agency selection officials may rely on reports and analyses prepared by others, the ultimate selection decision reflects the selection official’s independent judgment. However, the independence granted selection officials does not equate to a grant of authority to ignore, without explanation, those who advise them on selection decisions.

The HCA did not concur with the recommendations of the lower-level evaluators. Although the HCA adopted the subfactor-level adjectival ratings assigned by the SSEB, she did not adopt either the SSEB’s or the SSA’s analyses concerning the relative merits of the proposals or selection recommendations. Rather, without explaining the basis for her disagreement with the conclusions of the lower-level evaluators, the HCA proceeded to make conclusory pronouncements concerning which proposal offered the best value to the government.

We find from our review of the record no evidence of any meaningful consideration by the HCA of the evaluated differences in the firms’ offers. Rather, the HCA’s tradeoff assessment was based upon a mechanical comparison of the percentage of superior and highly successful ratings assigned to each offer. Where, as here, a solicitation provides for award on a best value basis, the decision as to the relative technical merit of the offers must be based upon a comparative consideration of the technical differences of the proposals.

As noted above, the SSEB documented a number of differences between the offerors’ proposals, which would appear to provide discriminators for a determination of the relative technical merit of the offers. For example, under the most important subfactor, access to existing Metrorail, the offerors’ proposed buildings were at differing distances from a Metrorail station. Also, King Farm, which offered a building at the greatest distance from a Metrorail station, proposed a shuttle service plan to mitigate that weakness. Similarly, under the planning efficiency and flexibility subfactor, the SSEB noted a number of differing strengths and weaknesses in the offerors’ proposed building layouts.

In the absence of a documented, meaningful consideration of the technical differences between the offerors’ proposals, the HCA could not perform a reasonable tradeoff analysis. That is, the HCA had no basis to determine that the Fishers Lane higher-priced proposal outweighed the cost savings offered by the King Farm lower-rated, but lower-priced offer. Similarly, the HCA had no basis to conclude that the Fishers Lane proposal was more advantageous than the proposals of One Largo and Metroview. Accordingly, we sustain the protesters’ challenge to GSA’s selection of the Fishers Lane offer as the best value to the government.

We recommend that GSA reevaluate offers under the access to amenities subfactor in accordance with the terms of the SFO and perform and document a new selection decision consistent with our decision.

We also recommend that the protesters be reimbursed their reasonable costs of filing and pursuing the protest, including attorneys’ fees.


Matter of: Mission Essential Personnel, LLC, B-404218.2, June 14, 2011
Protest of agency evaluation is sustained where record reflects that agency failed to consider one of evaluation factors established by terms of solicitation.

The RFQ, which was issued on April 18, 2010, contemplated issuance of an order for a fixed-price level-of-effort labor contract with a 1-year base period and four 1-year options to the FSS contract holder whose quotation represented the best value to the government.

Best value was to be determined based on a consideration of price and several non-price factors, with the non-price factors being given greater importance. The non-price evaluation factors specified in the RFQ were: (1) management plan; (2) quality control plan; (3) transition plan; (4) resumes; (5) past performance risk; (6) small business subcontracting plan; and (7) facility clearance (which was to be rated on a pass/fail basis).

The RFQ established the management plan factor as being the most important factor; the quality control plan factor was the second most important factor; the transition plan and resumes factors were of equal weight, and ranked third in terms of importance; and the past performance risk and small business subcontract plan factors were of lesser importance.

Under all of the foregoing factors except past performance risk, quotations were to be rated as exceptional, very good, acceptable, or unacceptable. Under the past performance risk factor, performance risk was to be rated as low, moderate, high, or neutral. Vendors’ proposed prices were to be evaluated for reasonableness and realism.

As it relates to the protest, the solicitation provided that under the management plan factor, the agency would consider the vendor’s reporting mechanisms; the relevant experience of its proposed management team “in relation to the scope and context of the Statement of Work [(SOW)]”; the proposed continuing education, professional development, and retraining opportunities for employees; and the vendor’s experience in hiring and retaining qualified personnel.

Regarding the resumes factor, the RFQ required vendors to submit resumes for certain key personnel (i.e., floor manager, workflow manager, workflow coordinator, outsource coordinator, and database operations coordinator). The RFQ further provided that the resumes would be evaluated in accordance with the following standard: Each resume will be evaluated against the requirements in the SOW and will receive its own rating from the Factors above. Resumes failing to meet the minimum requirements will be rated as Unacceptable. The overall evaluation for the Resume factor per company will be the average of each panel member’s rating for each resume.
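The RFQ's stated rule, averaging each panel member's rating for each resume into one company-level score, can be sketched as follows (the numeric mapping of the adjectival ratings and the sample data are invented for illustration):

```python
# Sketch of the RFQ's averaging rule: the overall resume-factor score is the
# average of each panel member's rating for each resume. The 4-to-1 numeric
# mapping of adjectival ratings is an assumption made for this example.

SCALE = {"exceptional": 4, "very good": 3, "acceptable": 2, "unacceptable": 1}

# panel_ratings[member][resume] -> adjectival rating (hypothetical data)
panel_ratings = {
    "member_1": {"floor_manager": "very good", "workflow_manager": "acceptable"},
    "member_2": {"floor_manager": "exceptional", "workflow_manager": "acceptable"},
}

# Flatten every member's rating of every resume, then average.
all_scores = [
    SCALE[rating]
    for member in panel_ratings.values()
    for rating in member.values()
]
overall = sum(all_scores) / len(all_scores)
print(f"Overall resume-factor score: {overall:.2f}")
```

As the decision goes on to explain, the agency never applied anything like this rule: the evaluators were mistakenly told only to verify that resumes had been submitted.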

With regard to the resumes factor, specifically, the SSP memorandum reflects an identical entry for MEP and SAIC, without identifying any “advantages” or “disadvantages.”

MEP takes issue with multiple aspects of the agency’s evaluation, arguing, among other things, that the agency failed to evaluate vendors’ quotations under the resumes factor as contemplated by the RFQ. We sustain MEP’s protest on this issue.
When an agency conducts a formal competition under the FSS program for award of a task order contract, we will review the agency’s actions to ensure that the evaluation was reasonable and consistent with the terms of the solicitation.
MEP argues that the agency did not evaluate vendors’ quotations under the resumes factor as provided for by the terms of the RFQ.

The agency concedes that it did not evaluate the resumes in the manner described by the RFQ. Instead, the agency explains that due to an “administrative oversight,” the evaluators were given an incorrect evaluation standard for the resumes factor. That is, rather than being advised of the above standard, the evaluators were instructed simply to verify that the vendors had furnished resumes for the key personnel positions.

The fact that the evaluators did not further evaluate vendors’ quotations with respect to the resumes factor is further confirmed by the fact that each evaluator worksheet for this factor is completely blank, with no documentation of any evaluation or assignment of relative strengths or weaknesses.

Applying this instruction, the evaluators essentially reviewed the resumes factor on a pass/fail basis--i.e., they merely determined whether vendors had provided resumes for their key personnel and did not further review the resumes to determine any strengths or weaknesses for the purpose of determining a rating under the resumes factor.

Notwithstanding this clear deviation from the evaluation criteria established by the RFQ, the agency attempts to excuse its admitted error by suggesting that it effectively considered the qualifications of the vendors’ key personnel under the management plan factor, which provided for consideration of the relevant experience of the proposed management team in relation to the SOW.

The agency’s analysis conflates two evaluation factors that the RFQ established as separate and distinct from one another, and, in so doing, undermines the significance of the resumes factor. By considering the resumes factor as subsumed under the management plan factor, rather than assigning it the separate adjectival rating and weight provided for in the RFQ, the agency conducted its evaluation in a manner that was contrary to the evaluation scheme expressly established by the RFQ.

Moreover, the single management plan factor standard upon which the agency relies was qualitatively different from the evaluation contemplated under the resumes factor. Specifically, the relevant management plan standard provided for a general assessment of the relevant experience of the vendors’ key personnel “in relation to the scope and context” of the SOW, whereas under the resumes factor, evaluators were specifically to rate resumes “against the requirements in the SOW.” In this regard, the SOW established specific minimum qualification requirements, as well as highly desired skills and proficiencies, which do not necessarily translate to an evaluation based solely on experience.

Finally, the record reflects that the agency’s failure to adhere to the evaluation scheme set forth in the RFQ resulted in the failure of the agency to recognize that a resume submitted by SAIC for a key position did not in fact identify the individual as possessing the appropriate minimum security clearance. Given the agency’s failure to evaluate vendors’ quotations according to the ground rules established by the RFQ, MEP’s protest of the agency’s evaluation under the resumes factor is sustained.
