data lake

Results 1 - 25 of 50
Published By: Oracle     Published Date: Jan 16, 2018
Download this webinar to gain insight into data lakes: definitions and drivers, barriers to data lake success, and cloud object storage.
Oracle
Published By: IBM APAC     Published Date: Jul 09, 2017
Organizations today collect a tremendous amount of data and are bolstering their analytics capabilities to generate new, data-driven insights from this expanding resource. To make the most of growing data volumes, they need to provide rapid access to data across the enterprise. At the same time, they need efficient and workable ways to store and manage data over the long term. A governed data lake approach offers an opportunity to manage these challenges. Download this white paper to find out more.
Tags : 
data lake, big data, analytics
    
IBM APAC
Published By: IBM APAC     Published Date: Jul 09, 2017
This Knowledge Brief investigates the impact of a data lake maintained in a cloud or hybrid infrastructure.
Tags : 
data lake, cloud, hybrid
    
IBM APAC
Published By: Amazon Web Services     Published Date: Jul 25, 2018
What is a Data Lake? Today’s organizations are tasked with managing multiple data types, coming from a wide variety of sources. Faced with massive volumes and heterogeneous types of data, organizations are finding that in order to deliver insights in a timely manner, they need a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. Data lakes are a new and increasingly popular way to store and analyze data that addresses many of these challenges. A data lake allows an organization to store all of its data, structured and unstructured, in one centralized repository. Since data can be stored as-is, there is no need to convert it to a predefined schema, and you no longer need to know what questions you want to ask of your data beforehand. Download to find out more now.
Amazon Web Services
Published By: Amazon Web Services     Published Date: Jul 25, 2018
Organizations are collecting and analyzing increasing amounts of data, making it difficult for traditional on-premises solutions for data storage, data management, and analytics to keep pace. Amazon S3 and Amazon Glacier provide an ideal storage solution for data lakes. They offer broad and deep integration with traditional big data analytics tools, as well as innovative query-in-place analytics tools that help you eliminate costly and complex extract, transform, and load processes. This guide explains each of these options and provides best practices for building your Amazon S3-based data lake.
Amazon Web Services
Published By: Amazon Web Services     Published Date: Jul 25, 2018
Defining the Data Lake “Big data” is an idea as much as a particular methodology or technology, yet it’s an idea that is enabling powerful insights, faster and better decisions, and even business transformations across many industries. In general, big data can be characterized as an approach to extracting insights from very large quantities of structured and unstructured data from varied sources at a speed that is immediate (enough) for the particular analytics use case.
Amazon Web Services
Published By: Teradata     Published Date: Jan 30, 2015
Our goal is to share best practices so you can understand how designing a data lake strategy can enhance and amplify existing investments and create new forms of business value.
Tags : 
data lake, data warehouse, enterprise data, migration, enterprise use, data lake strategy, business value, data management, data center
    
Teradata
Published By: RedPoint Global     Published Date: May 11, 2017
While they’re intensifying, business-data challenges aren’t new. Companies have tried several strategies in their attempt to harness the power of data in ways that are feasible and effective. The best data analyses and game-changing insights will never happen without the right data in the right place at the right time. That’s why data preparation is a non-negotiable must for any successful customer-engagement initiative. The fact is, you can’t simply load data from multiple sources and expect it to make sense. This white paper examines the shortcomings of traditional approaches such as data warehouses/data lakes and explores the power of connected data.
Tags : 
customer engagement, marketing data, marketing data analytics, customer data platform
    
RedPoint Global
Published By: Attunity     Published Date: Nov 15, 2018
Change data capture (CDC) technology can modernize your data and analytics environment with scalable, efficient and real-time data replication that does not impact production systems. To realize these benefits, enterprises need to understand how this critical technology works, why it’s needed, and what their Fortune 500 peers have learned from their CDC implementations. This book serves as a practical guide for enterprise architects, data managers and CIOs as they enable modern data lake, streaming and cloud architectures with CDC. Read this book to understand:
• The rise of data lake, streaming and cloud platforms
• How CDC works and enables these architectures
• Case studies of leading-edge enterprises
• Planning and implementation approaches
Attunity
Published By: Attunity     Published Date: Nov 15, 2018
IT departments today face serious data integration hurdles when adopting and managing a Hadoop-based data lake. Many lack the ETL and Hadoop coding skills required to replicate data across these large environments. In this whitepaper, learn how you can provide automated Data Lake pipelines that accelerate and streamline your data lake ingestion efforts, enabling IT to deliver more data, ready for agile analytics, to the business.
Attunity
Published By: Paxata     Published Date: Nov 14, 2018
This eBook provides a step-by-step best practices guide for creating successful data lakes.
Tags : 
data lakes, governance, monetization
    
Paxata
Published By: Amazon Web Services     Published Date: Oct 09, 2017
Today’s organizations are tasked with managing multiple data types, coming from a wide variety of sources. Faced with massive volumes and heterogeneous types of data, organizations are finding that in order to deliver insights in a timely manner, they need a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. Data lakes are a new and increasingly popular way to store and analyse data that addresses many of these challenges. A data lake allows an organization to store all of its data, structured and unstructured, in one centralized repository.
Tags : 
cost effective, data storage, data collection, security, compliance, platform, big data, it resources
    
Amazon Web Services
Published By: Teradata     Published Date: May 02, 2017
Kylo overcomes common challenges of capturing and processing big data. It lets businesses easily configure and monitor data flows in and through the data lake so users have constant access to high-quality data. It also enhances data profiling while offering self-service and data wrangling capabilities.
Tags : 
cost reduction, data efficiency, data security, data integration, financial services, data discovery, data accessibility, data comprehension
    
Teradata
Published By: IBM     Published Date: Jan 27, 2017
Companies today increasingly look for ways to house multiple disparate forms of data under the same roof, maintaining original integrity and attributes. Enter the Hadoop-based data lake. While a traditional on-premises data lake might address the immediate needs for scalability and flexibility, research suggests that it may fall short in supporting key aspects of the user experience. This Knowledge Brief investigates the impact of a data lake maintained in a cloud or hybrid infrastructure.
IBM
Published By: IBM     Published Date: Apr 18, 2017
The data integration tool market was worth approximately $2.8 billion in constant currency at the end of 2015, an increase of 10.5% from the end of 2014. The discipline of data integration comprises the practices, architectural techniques and tools that ingest, transform, combine and provision data across the spectrum of information types in the enterprise and beyond — to meet the data consumption requirements of all applications and business processes. The biggest changes in the market from 2015 are the increased demand for data virtualization, the growing use of data integration tools to combine "data lakes" with existing integration solutions, and the overall expectation that data integration will become cloud- and on-premises-agnostic.
Tags : 
data integration, data security, data optimization, data virtualization, database security, data analytics, data innovation
    
IBM
Published By: IBM     Published Date: Jul 06, 2017
Companies today increasingly look for ways to house multiple disparate forms of data under the same roof, maintaining original integrity and attributes. Enter the Hadoop-based data lake. While a traditional on-premises data lake might address the immediate needs for scalability and flexibility, research suggests that it may fall short in supporting key aspects of the user experience. This Knowledge Brief investigates the impact of a data lake maintained in a cloud or hybrid infrastructure.
Tags : 
data lake, user experience, knowledge brief, cloud infrastructure
    
IBM
Published By: Waterline Data & Research Partners     Published Date: Nov 07, 2016
Today, businesses pour Big Data into data lakes to help them answer the big questions: Which product to take to market? How to reduce fraud? How to retain more customers? People need to get these answers faster than ever before to reduce “time to answer” from months to minutes. The data is coming in fast and the answers must come just as fast. The answer is self-service data preparation and analytics tools, but with that comes an expectation that the right data is going to be there. Only by using a data catalog can you find the right data quickly to get the expected insight and business value. Download this white paper to learn more!
Waterline Data & Research Partners
Published By: Waterline Data & Research Partners     Published Date: Nov 07, 2016
For many years, traditional businesses have had a systematic set of processes and practices for deploying, operating and disposing of tangible assets and some forms of intangible asset. Through significant growth in our inquiry discussions with clients, and in observing increased attention from industry regulators, Gartner now sees the recognition that information is an asset becoming increasingly pervasive. At the same time, CDOs and other data and analytics leaders must take into account both internally generated datasets and exogenous sources, such as data from partners, open data and content from data brokers and analytics marketplaces, as they come to terms with the ever-increasing quantity and complexity of information assets. This task is clearly impossible if the organization lacks a clear view of what data is available, how to access it, its fitness for purpose in the contexts in which it is needed, and who is responsible for it.
Waterline Data & Research Partners
Published By: Amazon Web Services     Published Date: Feb 01, 2018
Moving Beyond Traditional Decision Support Future-proofing a business has never been more challenging. Customer preferences turn on a dime, and their expectations for service and support continue to rise. At the same time, the data lifeblood that flows through a typical organization is more vast, diverse, and complex than ever before. More companies today are looking to expand beyond traditional means of decision support, and are exploring how AI can help them find and manage the “unknown unknowns” in our fast-paced business environment.
Tags : 
predictive, analytics, data lake, infrastructure, natural language processing, amazon
    
Amazon Web Services
Published By: Teradata     Published Date: May 01, 2015
Creating value in your enterprise undoubtedly creates competitive advantage. Making sense of the data that is pouring into the data lake, accelerating the value of the data, and being able to manage that data effectively is a game-changer. Michael Lang explores how to achieve this success in “Data Preparation in the Hadoop Data Lake.” Enterprises experiencing success with data preparation acknowledge its three essential competencies: structuring, exploring, and transforming. Teradata Loom offers a new approach by enabling enterprises to get value from the data lake with an interactive method for preparing big data incrementally and iteratively. As the first complete data management solution for Hadoop, Teradata Loom enables enterprises to benefit from better and faster insights from a continuous data science workflow, improving productivity and business value. To learn more about how Teradata Loom can help improve productivity in the Hadoop Data Lake, download this report now.
Tags : 
data management, productivity, hadoop, interactive, enterprise, enterprise applications
    
Teradata
Published By: IBM Watson Health     Published Date: Nov 10, 2017
To address the volume, velocity, and variety of data necessary for population health management, healthcare organizations need a big data solution that can integrate with other technologies to optimize care management, care coordination, risk identification and stratification, and patient engagement. Read this whitepaper and discover how to build a data infrastructure using the right combination of data sources; a “data lake” framework with massively parallel computing that expedites the answering of queries and the generation of reports to support care teams; analytic tools that identify care gaps and rising risk; predictive modeling; and effective screening mechanisms that quickly find relevant data. In addition to learning about these crucial tools for making your organization’s data infrastructure robust, scalable, and flexible, get valuable information about big data developments such as natural language processing and geographical information systems, and the insight such tools can provide.
Tags : 
population health management, big data, data, data analytics, big data solution, data infrastructure, analytic tools, predictive modeling
    
IBM Watson Health
Published By: StreamSets     Published Date: Sep 24, 2018
The advent of Apache Hadoop™ has led many organizations to replatform their existing architectures to reduce data management costs and find new ways to unlock the value of their data. One area that benefits from replatforming is the data warehouse. According to research firm Gartner, “starting in 2018, data warehouse managers will benefit from hybrid architectures that eliminate data silos by blending current best practices with ‘big data’ and other emerging technology types.” There’s undoubtedly a lot to gain by modernizing data warehouse architectures to leverage new technologies; however, the replatforming process itself can be harder than it would at first appear. Hadoop projects often take longer than they should to deliver the promised benefits, and many problems can be avoided if you know what to watch for from the outset.
Tags : 
replatforming, age, data, lake, apache, hadoop
    
StreamSets
Published By: StreamSets     Published Date: Sep 24, 2018
Imagine you’re running a factory but without a supply chain management system or industrial controls. Instead, you expect your customers to find and fix your delivery and quality problems. Sound ludicrous? Well, in many enterprises that’s the current “supply chain management” process for big and fast data. It relies on the lightly monitored dumping of unsanitized data into a data lake or cloud store, forcing data scientists and business users to deal with failures from data availability and accuracy issues.
Tags : 
dataflow, operations, factory, industrial
    
StreamSets
Published By: Dell EMC     Published Date: Jun 29, 2016
EMC Isilon scale-out network-attached storage (NAS) is a simple and scalable platform to build a scale-out data lake and persist enterprise files of all sizes, scaling from terabytes to petabytes in a single cluster. It enables you to consolidate storage silos, improve storage utilization, and reduce costs, while providing a future-proofed platform to run today’s and tomorrow’s workloads.
Tags : 
network, storage, data, best practices
    
Dell EMC
Published By: Dell EMC     Published Date: Jun 29, 2016
Traditional DAS or Scale-out NAS for Hadoop Analytics? Here are our top 8 reasons to choose a Scale-Out Data Lake on EMC Isilon for Hadoop Analytics.
Tags : 
emc isilon, storage, best practices, data
    
Dell EMC

Add Your White Papers

To get your white papers featured in the Data Center Frontier Paper Library, contact:
Kevin@DataCenterFrontier.com