Red Hat Announces Change Data Capture Capabilities for Streaming Applications with Integration Release

Red Hat has announced that the latest release of Red Hat Integration is now available. The release is designed for building cloud-based, event-driven applications that handle a variety of streaming data and process information as it is created.

Red Hat’s change data capture and service registry components, based on the open source Debezium and Apicurio projects, identify changes in an application’s data, automatically publish those changes to an event-streaming backbone such as Apache Kafka, and govern data movement to prevent runtime data errors, Red Hat said.
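As a sketch of what this looks like in practice, a Debezium connector is typically registered with Kafka Connect as a JSON configuration; once deployed, it streams row-level changes from the database into Kafka topics. The hostnames, credentials, and table names below are illustrative, not from the Red Hat announcement:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "change-me",
    "database.server.id": "184054",
    "database.server.name": "inventory",
    "table.include.list": "inventory.orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

Each committed change to the watched tables then arrives as an event on a Kafka topic, where downstream consumers can react to it as it happens.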

“These components are used alongside core integration, messaging, data streaming and API management offerings to connect and act on data,” Red Hat said in its announcement. “While they are already tightly integrated at the runtime level, they can still require additional manual work by an organization’s IT operations team to get fully installed and running. We’ve changed that.”

Kubernetes Operators codify the operational knowledge required for packaging, deploying and managing a Kubernetes-native application. Once codified, Operators can automate these tasks. Customers have been able to use Operators to deploy core Red Hat Integration products for some time, including Red Hat Fuse, Red Hat 3scale API Management and Red Hat AMQ. Now, customers can install, upgrade and manage Red Hat Integration components using the new Red Hat Integration Operator for Red Hat OpenShift. This improves the user experience by providing direct access to all Operators across Red Hat’s integration portfolio, from core products to auxiliary components, Red Hat said.
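On OpenShift, installing an Operator like this is usually expressed as an Operator Lifecycle Manager Subscription manifest. The sketch below shows the general shape; the package name and channel are assumptions for illustration, not values confirmed by Red Hat:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: red-hat-integration          # subscription name is illustrative
  namespace: openshift-operators
spec:
  channel: stable                    # channel name is an assumption
  name: red-hat-integration          # actual package name may differ
  source: redhat-operators           # Red Hat's default catalog source
  sourceNamespace: openshift-marketplace
```

Applying a manifest of this form tells OLM to install the Operator and keep it upgraded on the chosen channel, which is the automation the announcement refers to.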

“In addition to streamlining the customer experience with automated installs for change data capture and the service registry, we’ve tightened the connection between the two components so customers can automatically populate JSON and Apache Avro schema for discovery and enforcement. With this, publishers and consumers can be more disciplined about interpreting the data they follow,” the company said. “Lastly, we’ve expanded the list of target databases with a new connector for IBM Db2. Customers can set their applications to detect changes to Db2 databases—as well as MongoDB, MySQL, PostgreSQL, and Microsoft SQL Server—and capture them for streaming processing.”
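For the new Db2 support, a Debezium Db2 connector configuration follows the same Kafka Connect pattern as the other databases. This is a minimal sketch; the host, credentials, database and table names are placeholders, not values from the announcement:

```json
{
  "name": "db2-connector",
  "config": {
    "connector.class": "io.debezium.connector.db2.Db2Connector",
    "database.hostname": "db2.example.com",
    "database.port": "50000",
    "database.user": "db2inst1",
    "database.password": "change-me",
    "database.dbname": "SAMPLE",
    "database.server.name": "db2server",
    "table.include.list": "MYSCHEMA.ORDERS"
  }
}
```

With a schema registry in the pipeline, the JSON or Avro schema of these change events can be registered automatically, giving consumers a contract to validate against.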

source: Red Hat
