MODIFICATION
99 -- Questions & Answers
- Notice Date
- 7/23/2012
- Notice Type
- Modification/Amendment
- Contracting Office
- Department of the Treasury, Bureau of the Public Debt (BPD), Division of Procurement, Avery 5F, 200 Third Street, Parkersburg, West Virginia, 26106-5312, United States
- ZIP Code
- 26106-5312
- Solicitation Number
- RFI-OFR-12-0102
- Archive Date
- 8/22/2012
- Point of Contact
- Lisa Stanley, Phone: 304-480-7213
- E-Mail Address
- psb3@bpd.treas.gov
- Small Business Set-Aside
- N/A
- Description
- RFI Page 2, Purpose and Background: The RFI states that the proposed OFR big data architecture will host a "large number of datasets, some of which are very large." Please clarify the initial startup capacity required in the year 1 implementation of this big data environment for OFR. Also, please provide additional details on the expected capacity growth of the proposed big data cluster. How large does OFR expect this big data environment to be in three years? Five years?

A: New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon. Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years.

RFI Page 3, Problem Description and Current Environment: The RFI describes OFR's current computing/storage environment: virtual desktops, relational databases, and Linux environments. Please provide additional details on the compute servers and storage used by OFR today for "traditional data management technologies and capabilities." Please clarify what relational database and ETL tool(s) are currently used.

A: Our compute environment is made up primarily of IBM blade servers, with a VMax SAN for tier 1 storage and a VNX for tier 2-3 storage. We run MS SQL clusters and use SSIS for ETL. For operating systems we run Windows 2008 servers and RHEL 6.0 servers; we use VMware as the hypervisor and XenDesktop with Windows 7 for VDI.

RFI Page 4, Problem Description, and Page 6, Objective 3: The RFI indicates that OFR wants a solution to "achieve fast and flexible ingest" and needs to onboard large data sets (i.e., under five days for large data sets, but preferably sooner). Please specify any other ingest rate requirement or metric needed to meet OFR requirements in the initial or final deployment phases of this program.

A: As previously stated, new data sources are identified continuously; we have to ingest, cleanse, and tag the data and make it available to the researchers and analysts ASAP. Our SLO for very large data sets is a maximum of 5 days from the time the data is received to the time it is available to the researchers. Large datasets are those larger than 20 TB; anything less than 20 TB should be available sooner than 5 days.

RFI Page 10, Paragraph 2.4.3, Backup and Recovery: The RFI asks about backup and checkpointing, archival, retrieval, and disaster recovery. How long does OFR need to retain historical data?

A: Not defined yet; plan for a 5-year retention minimum.

RFI Page 6, Paragraph 5, and Page 10, Paragraph 2.4.4, Unauthorized Access: The RFI describes how OFR requires a solution that can collect, store, analyze, and protect sensitive information, "such as transactions and positions of financial entities." Please clarify how important encryption of data storage devices is for OFR's solution.

A: As stewards of confidential and proprietary data, we have to make sure access to the data is closely guarded, monitored, and logged. We require a solution that will allow granular control, continuous monitoring, recording, and reporting of data access events. We need a solution that also restricts visibility to specific portions of the data; only specific individuals within the OFR will have access to all the data, and everyone else will have restricted access based on roles and responsibilities. Encryption at rest is very important.

RFI Page 5, Current Environment, and Objective 1: What data analytics solutions (vendor and product) does OFR currently have in the environment that would tie into this big data solution?

A: Current analytical and associated tools include Matlab, SAS, Mathematica, R, Python, Eclipse, Stata, etc. New tool sets will be added as required.

RFI Page 5, Current Environment: Does OFR currently use MapReduce technologies? If so, what is the current distribution or technology used?

A: We do not currently have a MapReduce technology.

RFI Page 5, Current Environment: What technologies are currently used for OFR's relational database infrastructure?

A: Please see Q&A #2.

RFI Page 5, Current Environment, and Objective 2: The RFI mentions a "traditional data warehouse environment" with which the proposed big data solution must interact. What technologies are currently used for any data warehouse currently deployed?

A: We do not currently have a data warehouse solution.

RFI Page 7, Requirement 2.1.2, In-Memory Analytics: The RFI requires in-memory analytics for the proposed future solution. What technology is being used for any current in-memory analytics being performed?

A: Our current datasets are small enough to process on traditional commodity hardware with robust memory configurations; as datasets grow we will require MPP and in-memory processing.

RFI Page 8, Requirement 2.1.6, Hosting and Sharing, and Page 5, Current Environment: The RFI indicates that the current environment uses a Treasury datacenter, and asks about hosting options. Please identify where the current analytics environment is hosted and to what extent the OFR plans to leverage commercial (non-Government, non-Treasury) datacenters for this requirement.

A: The entire infrastructure is currently hosted in the Main Treasury HQ datacenter. We are interested in learning about all available options; no plans have been made.

RFI Page 8, Paragraph 2.2.1, Incremental Scaling: What quantitative metrics does OFR have to meet with respect to the maximum number of datasets and size of datasets for this program?

A: Please see Q&A #1 and #3.

Regarding Objectives, "2. Provide architecture that will elegantly scale to OFR data volumes": Are there any Service Level Agreements on data freshness, e.g., updated every hour?

A: Hundreds of data sources will be fed into the environment; the frequency of each data feed will vary from several times a day to once a week, month, quarter, or year. Depending on each data set's availability, we expect it to be processed and refreshed ASAP.

Regarding Objectives, "4. Provide a solution which is able to track the provenance or source data": Does the current architecture already use any tool/system for managing this data lineage?

A: No tools/systems are used today.

Tools, including open source tools: In the RFI, Objectives section, Treasury states, "The OFR intends to maintain their current analytic tools while expanding to an undefined number of tools."

a. Can Treasury provide a list of currently approved commercial and open source tools that can be used in the solution, such as from an approved Enterprise Architecture list?

A: This is a new solution to the Treasury and no official approved list is available; all tools not currently in use by other Federal Government agencies will be evaluated by Treasury Cyber Security.

b. Can Treasury provide guidance on the standards and/or characteristics of open source tools that can be used?

A: No, we cannot.

c. The RFI states that the OFR intends to expand their current analytic tool suite with "an undefined number of tools," "including specialized visualization tools." Is developing or recommending analytic tools and analyst training for the additional analytic tools expected to be part of the required contractor service?

A: Vendors should recommend analytic tools and analyst training, and explain why these tools are recommended (including both pros and cons for each tool).

Data types and sizes: In the RFI, Problem Description section, Treasury states: "The OFR data processing, storage, and analysis needs are expected to increase exponentially over the coming 1-5 years, overwhelming the scalability of the traditional analytic environment. Data capacity is expected to reach multiple petabytes. Data sources will be heterogeneous and will grow to hundreds of sources and the OFR may maintain exponentially more derived data sets. The OFR therefore anticipates the need for a big data solution." And further: "The OFR will receive data from the FSOC agencies; commercial data collection services (such as Bloomberg, Moody's, and Standard and Poor's); government and non-government regulators; stock and options exchanges; clearing houses; financial institutions; and, international agencies and regulators. Some of the analyses desired will require a wide range of data and complex analysis techniques that are prohibitively burdensome and complex using traditional data table methods."

d. Can Treasury provide an estimate of the amount of data expected to be used in the solution by year?

A: New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon. Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years.

Can Treasury describe the types of data that will be required to be supported?

A: Data sources and formats will be heterogeneous and will grow to hundreds of sources. Structured and unstructured data will be collected.

Can Treasury provide estimated file sizes of data, by type, expected to be used?

A: Not at this time.

Is there a predefined data lifecycle or process to differentiate new data from historic data, or high-priority data from low-priority?

A: No.

Is there a known or expected time period that the source data and associated derived data sets will be active versus historical/archived?

A: Not at this time.

Do the data sets need to be available for collaboration during the historical/archive phase as well as while active?

A: Yes.

How long does the OFR expect to store source and derived data?

A: Not yet defined.
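The capacity figures repeated in these answers (feeds under 100 GB today, a petabyte-range pool within 18 months, up to 20 petabytes within 3-5 years) imply roughly 10% month-over-month growth. A short sketch of that arithmetic, under the purely illustrative assumptions of constant exponential growth and a 4-year horizon for the 20 PB figure:

```python
# Back-of-envelope growth model from the Q&A estimates: ~1 PB at
# month 18, ~20 PB at month 48 (assumed midpoint of "3-5 years").
# Constant exponential growth is an illustrative assumption only.

def implied_monthly_growth(start_pb, end_pb, months):
    """Constant monthly factor taking start_pb to end_pb over `months`."""
    return (end_pb / start_pb) ** (1.0 / months)

g = implied_monthly_growth(1, 20, 48 - 18)  # ~1.105, i.e. ~10.5%/month

def pool_pb(month, growth=g):
    """Projected pool size (PB) at a given month under that model."""
    return 1.0 * growth ** (month - 18)

for m in (18, 24, 36, 48):
    print(f"month {m}: ~{pool_pb(m):.1f} PB")
```

The only point of the sketch is that the stated figures imply sustained double-digit monthly growth, which is what drives the incremental-scaling objective.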
Can Treasury clarify if some external data required for analysis will be accessed remotely, or will all data be ingested into the Treasury solution?

A: Both.

Can Treasury describe their expectations for in-memory analytics given the anticipated dataset sizes?

A: Answers to some of the queries in a traditional data warehouse would take too long. As stated in the RFI: "The OFR data processing, storage, and analysis needs are expected to increase exponentially over the coming 1-5 years, overwhelming the scalability of the traditional analytic environment. Some of the analyses desired will require a wide range of data and complex analysis techniques that are prohibitively burdensome and complex using traditional data table methods."

Infrastructure: In the RFI Objectives section, Treasury states: "Provide a solution which is achievable within 18 months." Can Treasury provide guidance on the use of Gov Cloud services and their acceptability for the OFR solution? For example, does Treasury expect to be able to use Amazon GovCloud or another GSA-approved offering for the OFR solution?

A: We are interested in assessing all options that are cost effective and that meet the mission requirements and Treasury security posture; solutions must conform to FISMA compliance requirements.

Does OFR want the solution to be hosted at the existing Department of the Treasury data center? If so, could you provide information on the current infrastructure?

A: The solution will be built from the ground up; no current infrastructure exists for this purpose.

Data security: In reference to RFI Section 2.4, can Treasury provide the FISMA certification level (Low, Moderate, High) required for this solution?

A: Moderate/High.

The RFI mentions requirements for collaboration, document management, remote access, and access by non-government personnel. Does OFR have an existing identity management infrastructure robust enough to leverage for these requirements, or do we need to propose that as well?

A: The current solution is robust and we do not require a proposal for anything other than the big data solution.

Are there forensics requirements with respect to preserving original source data?

A: Yes, all source data has to be preserved in its original format.

Has the OFR previously invited "other government and non-government personnel to participate in analyses using various OFR data sets"? If not, is the process for this type of collaboration part of the required service?

A: No.

Is training for the analysts and system administrators part of the required services?

A: Where applicable and needed, if a technology is new and a skill gap is identified.

RFI Section 2.3.7: The second bullet is incomplete. Can Treasury clarify?

A: Not every solution may be capable of delivering the end-to-end functionality required; if your solution meets part of the requirement, tell us what portion and how it works natively or integrates with other solutions to deliver the end-to-end functionality.

Problem Description, paragraph 2, page 3. Question: Will a storage demand model be included as part of the RFP?

A: We cannot comment on anything outside the RFI.

Question: Do you anticipate the need for multiple tiers of performance and availability for the data?

A: Yes.

Problem Description, paragraph 3, page 3. Question: What tools are currently used for collaboration, document management, and publishing?

A: Mostly Microsoft SharePoint 2010.

Question: Will the contractor be required to integrate these tools?

A: No.

Problem Description, paragraph 8, page 4. Question: Do you have any existing taxonomies or ontologies? If so, would you share them?

A: No.

Objective 5, "Provide a solution which minimizes risk to the security of the data entrusted to the OFR." Question: Are there specific SLAs for High Availability?

A: Not yet defined.

Section 2.1.6, Hosting and Sharing. Question: Are you looking at the contractor providing Analytics as a Service?

A: No.

Section 2.2.1, Incremental Scaling. Question: Would you provide details on the volume of the current data feeds (size and number) and the anticipated growth for the next 4 years?

A: New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon. Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years.

Question: Would you provide more details on the type and volume of data that can be handled at FISMA Moderate vs. FISMA High?

A: Not at this time; it depends on the solution's access controls, auditing, security, and monitoring capability.

Section 2.3, Data On-Boarding Process. Question: Would you provide details on the variety of the current data feeds (structured vs. unstructured) and the anticipated growth for the next 4 years?

A: Not at this time; not all data sources have been identified. New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon. Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years.

Question: How often is new data ingested? (Continuous, daily, weekly, monthly, quarterly?)

A: Hundreds of data sources will be fed into the environment; the frequency of each data feed will vary from several times a day to once a week, month, quarter, or year. Depending on each data set's availability, we expect it to be processed and refreshed ASAP.

Question: Are there specific SLAs for data ingest?

A: The largest datasets should be ingested and made available to researchers in a maximum of 5 days from receipt of the data; small datasets will be same day.

Question: Are there specific SLAs for queries and reports?
A: No.

Section 2.3.2, Extract, Transform, Load (ETL). Question: Do you currently use any ETL tools? If so, which tools? Do you have an enterprise license for these tools, and will the contractor be able to use them?

A: We currently use SSIS, and yes, it can be used by contractors. Other ETL tools will be evaluated as part of the RFI.

Section 2.3.7, Customization and Third-Party Integration. Question: Do you currently use any MDM tools? If so, which one? Do you have an enterprise license, and will the contractor be able to use them?

A: Not at this time; we will evaluate tools as part of the RFI.

Section 2.4.3, Backup and Recovery. Question: What are the COOP and DR requirements?

A: In case of a major event that disables the primary datacenter, OFR should be able to continue operating at an alternate facility.

Does OFR anticipate the big data, advanced data integration and analysis platform, which is the subject of this RFI, to be established in the Department of the Treasury data center (alongside the existing analytic capability), or does OFR expect the vendor to provide a hosted capability to support the anticipated platform?

A: We will consider all options.

Objective 1 states, "The solution must demonstrate how the architecture is optimized to facilitate the performance of the OFR current and future tool sets and technologies." In order to demonstrate performance optimization against the current tool sets and technologies, the existing analysis platform must be understood. Can the government provide architectural diagrams and details of the existing analysis platform?

A: Not at this time.

What are your current data volumes, and can you provide an upper-bound estimate of data growth over the next 5 years?

A: 20 petabytes.

What is the anticipated frequency of data updates from the various feeds? Hourly? Daily? Weekly? Streaming?

A: All of the above.

What will be the anticipated and/or desired structure of data coming into the architecture (structured, unstructured, XML, spatial)?

A: We expect hundreds of data sources, and we will receive data in all formats mentioned above.

What is the RDBMS platform for the EDW, and what is the nature of the workload being supported (analytic, mixed, transactional)?

A: Analytic.

What sort of data retention requirements apply as data volumes continue to grow?

A: Not defined, although 5-year retention should be considered for planning purposes.

Could you provide further clarification around the interest in in-memory analytics? We ask whether the motivation is more for improved system performance or whether there would be interest in real-time analytics done on data as it is coming into the architecture.

A: Mostly improved system performance.

Is there a desire to move to a cloud/open source environment, or is there flexibility to work with any MPP technology along the lines of an MPP appliance?

A: We will consider all options as long as the security requirements are met.

Is the archiving solution going to support the current source infrastructure also? If yes, please provide the type and versions of relational databases used in the current environment.

A: Yes; the current solution is MS SQL 2008.

Are data masking capabilities required for unstructured data also? If yes, what kind of files need to be masked/redacted?

A: PII data needs to be redacted or masked from all data sources.

In the RFP you state that the OFR intends to scale and maintain its existing traditional analytic environment while enhancing capabilities through the addition of the big data environment. Will you please explain what you mean by "maintain" and "traditional"?

A: Analytical tools in use by our researchers and analysts (SAS, MatLab, Mathematica, Stata, R, etc.) should be supported by the big data solution.

Do you have expectations for the performance of the analytics, meaning do your analysts need an architecture that supports linear query response times without IT involvement in the traditional fashion, meaning pre-aggregation and tuning?

A: Yes.

Has OFR considered the possibility of near-real-time access, meaning that within seconds of the data changing in the source systems, complex in-memory analytics (R models, for example) are free for analysts to run so that answers are completely current?

A: Yes.

In the RFP you state that the OFR intends to scale and maintain its existing traditional analytic environment while enhancing capabilities through the addition of the big data environment. Will OFR be open to expanding the suite of analytical tools if the big data solution offers analytical capabilities that deliver the needed outcomes and use cases, which would also obviate the need for one or more of the existing tools to achieve optimization and a simplified architecture?

A: Although new tools for better performance and capabilities will be considered, we expect current tools to still work in the proposed solution.

In the RFP, you state that you are expecting data capacity to reach multiple petabytes and that your data sources will grow to hundreds of sources. Can you clarify the number and size of your current data sets and when and/or how you anticipate this growth (i.e., departmental mergers, purchased data, number of new sources monthly? yearly?)? Note: this will help us in answering this RFI but will also help us with sizing and pricing. Also, as far as the types of data, can you provide relative ratios of the types of data, meaning structured versus unstructured, and within unstructured, words versus video/images?

A: New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon.
Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years. Ratios are not yet known.

Will these data sources be primarily US-based or international? Will these source systems be solely within OFR, or will you extend to other Federal agencies? Will there be users outside of OFR?

A: All of the above.

Are you currently using any ETL software to load your existing analytical solution? Regardless, if the big data solution is optimized to load all sources with other ETL software that is bundled in with the big data solution, will that be acceptable to OFR?

A: Yes.

Question 2.3.7: Can you please clarify this question? What are the OFR's special needs in these areas (data registry, master data management, metadata repositories, and data cleaning)? Do you currently have any tools in-house that support these areas? If so, is it your intention to replace these as part of this data analytical solution?

A: We do not currently have any tools, and we expect them to be part of the RFI solution.

For the potential procurement, do you already have a contract vehicle that you favor or intend to use?

A: We will consider all vehicles.

Has OFR assessed the impacts of having a choice of hardware vendors (i.e., Dell, HP, IBM, etc.) versus being locked into a single vendor for software and hardware?

A: Yes.

Will OFR be considering the costs of development, test, and other environments in addition to the production environment?

A: Yes.

What percentage of the OFR analysts want to use SAS in contrast to R?

A: This information is not available.

Does OFR have a requirement to ingest, store, analyze, fuse, or consume data on an incremental / near-real-time basis, or will the system be strictly batch oriented?

A: We may get some incremental / near-real-time data; however, most data will be batched.

Can you provide additional insight into the major types of data and analysis being ingested? I.e., will the system need to work with image, video, audio, or object data?

A: Financial data; no audio or video.

Should architectures and pricing address High Availability? How many data centers should be considered?

A: 2 DCs (primary and DR).

Can OFR provide additional detail on their current BI/reporting, integration/ESB, and ETL solution(s)?

A: The current solution is all MS SQL based: MS SSIS and MS BI.

Will OFR consider structuring the RFP such that responders provide a base price to meet core requirements and optional packages to meet additional recommendations?

A: No comment on a possible RFP; however, we are interested in vendors' opinions on how awards can be structured to provide the best value to the government.

Can OFR provide additional insight into their current system, virtualization, storage provisioning, and operational environment and processes? Are there any planned changes?

A: Our compute environment is made up primarily of IBM blade servers, with a VMax SAN for tier 1 storage and a VNX for tier 2-3 storage. We run MS SQL clusters and use SSIS for ETL. For operating systems we run Windows 2008 servers and RHEL 6.0 servers; we use VMware as the hypervisor and XenDesktop with Windows 7 for VDI.

Can OFR provide detail on what levels of data ingestion, analysis, and organization it requires to consider a new data type on-boarded?

A: No.

Can OFR provide additional requirements/detail on the data provenance requirements? I.e., if a report contains an aggregate value (say, an average) of a common field across 5 data types that each contained 1 billion rows, what level of detail would OFR require?

A: We need to identify the original source of all datasets that make up the aggregate.

The document provides an excellent high-level view of OFR's needs and technology, but any proposed architecture and plan will need to be refined based on additional levels of detail. Does OFR plan to perform additional analysis before implementation? If so, does OFR plan on performing this analysis in a separate engagement from the implementation or as a combined project?

A: We will perform additional analysis.

Is there any requirement for Natural Language Processing (NLP) for sentiment analysis? Will this be in the requirements?

A: No.

What will be the total estimated size of the government's data at the start of the solution implementation? What is the estimated growth rate for onboarding "hundreds" of data sources (i.e., 20% growth each year for 5 years)?

A: New data sources are identified continuously, and the number of datasets will be in the hundreds. Current data feeds are less than 100 GB in size; however, data feeds in the terabyte range will be acquired soon. Our estimates indicate a data "pool" in the petabyte range within 18 months and up to 20 petabytes in the next 3-5 years.

What is the government's estimated data volume over 5 years? 10, 25, 50 petabytes?

A: 20+.

How frequently would archived data be accessed (once moved onto archived storage/file systems)?

A: As often as required.

What industry standard or non-standard data formats will the government require for ingesting data sources into the solution?

A: No requirements have been established.

Section 2.1.5 states, "Describe how data is retrieved both by ingest time and event time." Does the government define "event time" to mean the storage and display of a transaction or event record with the actual time it occurred in the financial markets? Will the government please elaborate on this item?

A: Event time is defined as "storage and display of a transaction or event record with the actual time it occurred in the financial markets."

Are the "end users" referenced by Section 2.1.5 in addition to the 50-60 analyst and researcher users, or are they the same?
A: Users of the analytical environment will be the researchers and analysts.

What user types or roles must the data analytic solution support to meet the government's requirements (e.g., application, system, Council member, executive user, statistical analyst, financial analyst, data scientist, other)?

A: All of the above.

What is the government's expected scope of tasks to be accomplished in "under 5 days" for onboarding new data sets?

A: As a new dataset is acquired, it will be ingested and made available to the researchers within 5 days of receipt of the data.

What are the use case scenarios where the government would require in-memory analytics? What in-memory capabilities are used by the government today?

A: Our current datasets are small enough to process on traditional commodity hardware with robust memory configurations; as datasets grow we will require MPP and in-memory processing for performance.

What types of access to data does the solution need to support? E.g., traditional SQL, real-time (or API-based, used by applications), etc.

A: All of the above.

Will this RFP be listed as a GSA solicitation? And if so, what SIN number(s) will be associated with this RFP?

A: The government has not yet determined how this requirement will be solicited.

Is this endeavor in support of another (larger) project, or are you standing up a new program?

A: We are standing up a new program.

Have you had the opportunity to have a technical demonstration of product capability by any vendors? If so, which are you considering (if any)?

A: We are considering all viable options and solutions at this time. We recognize no single vendor may be able to provide a complete solution, so feel free to tell us how your products/services can meet some of our requirements.

Do you have big data in-house capabilities?

A: We do not currently have big data capability.

What are you trying to accomplish with this procurement, and do you have success elements defined?

A: Success elements will be defined when we enter the formal procurement stage. At this time we are seeking additional information to validate our previous analysis and to assist in further planning and strategy.

Which contract vehicle are you favoring to procure these services?

A: We will consider all vehicles.

Do you have a timeframe for beginning work?

A: If "beginning work" means initiating procurements, then we do not have a formal timeline established as of yet.

Are you focused on a particular technology and/or vendor for this solution?

A: Although we know that Hadoop plays a major role in almost every solution we have seen, we have not focused on any single technology/vendor in particular and are open to any solution or combination of solutions from different vendors.

What are your in-house capabilities when it comes to this technology?

A: OFR does not have a big data solution or capability at this time.

Is the OFR looking for a software-only solution or a fully integrated software/hardware solution?

A: We are open to either solution.

Can the OFR elaborate on the specific security requirements?

A: As stewards of confidential and proprietary data, we have to make sure access to the data is closely guarded, monitored, and logged. We require a solution that will allow granular control, continuous monitoring, recording, and reporting of data access events. We need a solution that also restricts visibility to specific portions of the data; only specific individuals within the OFR will have access to all the data, and everyone else will have restricted access based on roles and responsibilities.

Can the OFR provide details regarding the existing analytical environment?

A: The current environment consists of an MS SQL cluster to fulfill short-term relational database requirements. We offer both Windows VDI and Linux server-based compute capability to our researchers and analysts. Our environment is accessible via the Treasury remote access facility. Current analytical and associated tools include Matlab, SAS, Mathematica, R, Python, Eclipse, Stata, etc. New tool sets will be added as required.

Can the OFR provide any metrics regarding the velocity and volume of additional data?

A: New datasets are continually identified. While we don't have very firm estimates, for the purpose of this exercise we are estimating that 10-20 petabytes of data will be collected over the next 36 months, and we expect daily feeds of up to 10 TB per day. We are required to ingest the data and make it available to the researchers within a maximum of 5 days from collection (for a new data set never seen before).

What format(s) are used by the FSOC agencies when sending data to the OFR?

A: You should assume data exchanges use a variety of formats.

Will the existing analytical environment continue to be responsible for the presentation of analyzed sets?

A: Please clarify your question.

What are the typical workflow use cases in the government's existing traditional analytic environment?

A: Our existing analytic environment is "traditional," and as such it can only perform "traditional" workflows; it will not work in the required solution, as it cannot scale. Therefore, details concerning typical workflows are irrelevant.

What workflow capabilities are required from the new data analytics solution?

A: We cannot answer without additional detail; the question is too broad and open to interpretation.

What use case scenarios would the government require support for real-time data and analysis or real-time event processing in the solution?

A: Specific case scenarios can't be provided; however, it can be assumed that there is no need for real-time data processing.
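One of the answers above states the provenance requirement concretely: for an aggregate value computed across several datasets, OFR must be able to identify the original source of every dataset that contributed. A minimal sketch of that pattern, carrying lineage metadata alongside the computed value (all names and structures here are hypothetical illustrations, not an OFR design):

```python
# Illustrative provenance pattern: an aggregate keeps a record of
# which datasets (and which original sources) fed into it.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str                 # e.g., a hypothetical "exchange_feed_2012q2"
    source: str               # original provider the data came from
    values: list = field(default_factory=list)

@dataclass
class AggregateResult:
    value: float
    provenance: list          # (dataset name, source, row count) per input

def average_with_provenance(datasets):
    """Average a common field across datasets, recording lineage."""
    total, count, lineage = 0.0, 0, []
    for ds in datasets:
        total += sum(ds.values)
        count += len(ds.values)
        lineage.append((ds.name, ds.source, len(ds.values)))
    return AggregateResult(value=total / count, provenance=lineage)

result = average_with_provenance([
    Dataset("feed_a", "Exchange A", [10.0, 12.0]),
    Dataset("feed_b", "Clearing House B", [8.0]),
])
print(result.value)       # 10.0
print(result.provenance)  # [('feed_a', 'Exchange A', 2), ('feed_b', 'Clearing House B', 1)]
```

At petabyte scale the lineage record would live in a metadata store rather than in memory, but the design point is the same: provenance is captured at computation time, not reconstructed afterward.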
- Web Link
- FBO.gov Permalink: https://www.fbo.gov/spg/TREAS/BPD/DP/RFI-OFR-12-0102/listing.html
- Record
- SN02811629-W 20120725/120723235130-afc7b3ee0484bc54f450594498a1d490 (fbodaily.com)
- Source
- FedBizOpps Link to This Notice (may not be valid after Archive Date)