Introduction to Kafka Connectors

1. Overview

Apache Kafka® is a distributed streaming platform. In a previous tutorial, we discussed how to implement Kafka consumers and producers using Spring.

In this tutorial, we'll learn how to use Kafka Connectors.

We'll look at:

  • Different types of Kafka Connectors
  • Features and modes of Kafka Connect
  • Connector configuration using property files as well as the REST API

2. Basics of Kafka Connect and Kafka Connectors

Kafka Connect is a framework for connecting Kafka with external systems such as databases, key-value stores, search indexes, and file systems, using so-called Connectors.

Kafka Connectors are ready-to-use components that can help us import data from external systems into Kafka topics and export data from Kafka topics into external systems. We can use existing connector implementations for common data sources and sinks, or implement our own connectors.

A source connector collects data from a system. Source systems can be entire databases, streams tables, or message brokers. A source connector could also collect metrics from application servers into Kafka topics, making the data available for stream processing with low latency.

A sink connector delivers data from Kafka topics into other systems, which might be indexes such as Elasticsearch, batch systems such as Hadoop, or any kind of database.

Some connectors are maintained by the community, while others are supported by Confluent or its partners. Indeed, we can find connectors for most popular systems, like S3, JDBC, and Cassandra, just to name a few.

3. Features

Kafka Connect's features include:

  • A framework for connecting external systems with Kafka – it simplifies the development, deployment, and management of connectors
  • Distributed and standalone modes – it helps us deploy large clusters by leveraging the distributed nature of Kafka, as well as setups for development, testing, and small production deployments
  • REST interface – we can manage connectors using a REST API
  • Automatic offset management – Kafka Connect helps us handle the offset commit process, which saves us the trouble of implementing this error-prone part of connector development manually
  • Distributed and scalable by default – Kafka Connect uses the existing group management protocol; we can add more workers to scale up a Kafka Connect cluster
  • Streaming and batch integration – Kafka Connect is an ideal solution for bridging streaming and batch data systems in connection with Kafka's existing capabilities
  • Transformations – these enable us to make simple and lightweight modifications to individual messages

4. Setup

Instead of using the plain Kafka distribution, we'll download Confluent Platform, a Kafka distribution provided by Confluent, Inc., the company behind Kafka. Confluent Platform comes with some additional tools and clients compared with plain Kafka, as well as some additional pre-built Connectors.

For our case, the Open Source edition is sufficient, which can be found at the Confluent site.

5. Kafka Connect Quick Start

As a starting point, we'll discuss the principles of Kafka Connect using its most basic Connectors, which are the file source connector and the file sink connector.

Conveniently, Confluent Platform comes with both of these connectors, as well as reference configurations.

5.1. Source Connector Configuration

For the source connector, the reference configuration is available at $CONFLUENT_HOME/etc/kafka/connect-file-source.properties:

name=local-file-source
connector.class=FileStreamSource
tasks.max=1
topic=connect-test
file=test.txt

This configuration has some properties that are common for all source connectors:

  • name is a user-specified name for the connector instance
  • connector.class specifies the implementing class, basically the kind of connector
  • tasks.max specifies how many instances of our source connector should run in parallel, and
  • topic defines the topic to which the connector should send the output

In this case, we also have a connector-specific attribute:

  • file defines the file from which the connector should read the input

For this to work, let's create a basic file with some content:

echo -e "foo\nbar\n" > $CONFLUENT_HOME/test.txt

Note that the working directory is $CONFLUENT_HOME.

5.2. Sink Connector Configuration

For our sink connector, we'll use the reference configuration at $CONFLUENT_HOME/etc/kafka/connect-file-sink.properties:

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test

Logically, it contains exactly the same parameters, though this time connector.class specifies the sink connector implementation, and file is the location where the connector should write the content.

5.3. Worker Configuration

Finally, we have to configure the Connect worker, which will integrate our two connectors and do the work of reading from the source connector and writing to the sink connector.

For that, we can use $CONFLUENT_HOME/etc/kafka/connect-standalone.properties:

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/share/java

Note that plugin.path can hold a list of paths where connector implementations are available.

As we'll use connectors bundled with Kafka, we can set plugin.path to $CONFLUENT_HOME/share/java. Working with Windows, it might be necessary to provide an absolute path here.
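
For example, a worker that loads connectors from two locations might use a comma-separated list like this (both paths are purely illustrative):

plugin.path=/opt/confluent/share/java,/opt/kafka-connect/custom-plugins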

For the other parameters, we can leave the default values:

  • bootstrap.servers contains the addresses of the Kafka brokers
  • key.converter and value.converter define converter classes, which serialize and deserialize the data as it flows from the source into Kafka and then from Kafka to the sink
  • key.converter.schemas.enable and value.converter.schemas.enable are converter-specific settings
  • offset.storage.file.filename is the most important setting when running Connect in standalone mode: it defines where Connect should store its offset data
  • offset.flush.interval.ms defines the interval at which the worker tries to commit offsets for tasks

And the list of available parameters is quite extensive, so check out the official documentation for a complete list.

5.4. Kafka Connect in Standalone Mode

And with that, we can start our first connector setup:

$CONFLUENT_HOME/bin/connect-standalone \
  $CONFLUENT_HOME/etc/kafka/connect-standalone.properties \
  $CONFLUENT_HOME/etc/kafka/connect-file-source.properties \
  $CONFLUENT_HOME/etc/kafka/connect-file-sink.properties

First off, we can inspect the content of the topic using the command line:

$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic connect-test --from-beginning

As we can see, the source connector took the data from the test.txt file, transformed it into JSON, and sent it to Kafka:

{"schema":{"type":"string","optional":false},"payload":"foo"} {"schema":{"type":"string","optional":false},"payload":"bar"}

And, if we have a look at the folder $CONFLUENT_HOME, we can see that a file test.sink.txt was created here:

cat $CONFLUENT_HOME/test.sink.txt
foo
bar

As the sink connector extracts the value from the payload attribute and writes it to the destination file, the data in test.sink.txt has the content of the original test.txt file.

Now let's add more lines to test.txt.

When we do, we see that the source connector detects these changes automatically.

We only have to make sure to insert a newline at the end of the file; otherwise, the source connector won't consider the last line.
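
For example, appending a line with echo keeps the trailing newline intact (the added content is just an illustration):

echo "baz" >> $CONFLUENT_HOME/test.txt

Shortly afterwards, baz should show up in test.sink.txt as well.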

At this point, let's stop the Connect process, as we'll start Connect in distributed mode in a few lines.

6. Connect's REST API

Until now, we made all configurations by passing property files via the command line. However, as Connect is designed to run as a service, there is also a REST API available.

By default, it is available at http://localhost:8083. A few endpoints are:

  • GET /connectors – returns a list with all connectors in use
  • GET /connectors/{name} – returns details about a specific connector
  • POST /connectors – creates a new connector; the request body should be a JSON object containing a string name field and an object config field with the connector configuration parameters
  • GET /connectors/{name}/status – returns the current status of the connector – including if it is running, failed or paused – which worker it is assigned to, error information if it has failed, and the state of all its tasks
  • DELETE /connectors/{name} – deletes a connector, gracefully stopping all tasks and deleting its configuration
  • GET /connector-plugins – returns a list of connector plugins installed in the Kafka Connect cluster

The official documentation provides a list with all endpoints.
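
For example, with a worker running, we can list the registered connectors and inspect the status of the source connector we defined earlier:

curl http://localhost:8083/connectors
curl http://localhost:8083/connectors/local-file-source/status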

We'll use the REST API for creating new connectors in the following section.

7. Kafka Connect in Distributed Mode

The standalone mode works perfectly for development and testing, as well as smaller setups. However, if we want to make full use of the distributed nature of Kafka, we have to launch Connect in distributed mode.

By doing so, connector settings and metadata are stored in Kafka topics instead of the file system. As a result, the worker nodes are really stateless.

7.1. Starting Connect

A reference configuration for distributed mode can be found at $CONFLUENT_HOME/etc/kafka/connect-distributed.properties.

Parameters are mostly the same as for standalone mode. There are only a few differences:

  • group.id defines the name of the Connect cluster group. The value must be different from any consumer group ID
  • offset.storage.topic, config.storage.topic and status.storage.topic define the topics where Connect stores offsets, connector configurations, and status updates, respectively. For each topic, we can also define a replication factor; see the snippet below
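
The corresponding defaults in the reference configuration look roughly like this (exact values may vary between versions):

group.id=connect-cluster
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
config.storage.topic=connect-configs
config.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1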

Again, the official documentation provides a list with all parameters.

We can start Connect in distributed mode as follows:

$CONFLUENT_HOME/bin/connect-distributed $CONFLUENT_HOME/etc/kafka/connect-distributed.properties
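
Once the worker is up, we can verify that Connect created its internal topics; depending on the Kafka version, the tool expects either --zookeeper localhost:2181 or --bootstrap-server localhost:9092:

$CONFLUENT_HOME/bin/kafka-topics --list --bootstrap-server localhost:9092

We should see connect-configs, connect-offsets, and connect-status among the listed topics.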

7.2. Adding Connectors Using the REST API

Now, compared to the standalone startup command, we didn't pass any connector configurations as arguments. Instead, we have to create the connectors using the REST API.

To set up our example from before, we have to send two POST requests to http://localhost:8083/connectors containing the following JSON structs.

First, we need to create the body for the source connector POST as a JSON file. Here, we'll call it connect-file-source.json:

{ "name": "local-file-source", "config": { "connector.class": "FileStreamSource", "tasks.max": 1, "file": "test-distributed.txt", "topic": "connect-distributed" } }

Note how this looks pretty similar to the reference configuration file we used the first time.

And then we POST it:

curl -d @"$CONFLUENT_HOME/connect-file-source.json" \ -H "Content-Type: application/json" \ -X POST //localhost:8083/connectors

Then, we'll do the same for the sink connector, calling the file connect-file-sink.json:

{ "name": "local-file-sink", "config": { "connector.class": "FileStreamSink", "tasks.max": 1, "file": "test-distributed.sink.txt", "topics": "connect-distributed" } }

And perform the POST like before:

curl -d @$CONFLUENT_HOME/connect-file-sink.json \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8083/connectors

If needed, we can verify that this setup is working correctly:

$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic connect-distributed --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}

And, if we have a look at the folder $CONFLUENT_HOME, we can see that a file test-distributed.sink.txt was created here:

cat $CONFLUENT_HOME/test-distributed.sink.txt
foo
bar

After we've tested the distributed setup, let's clean up by removing the two connectors:

curl -X DELETE http://localhost:8083/connectors/local-file-source
curl -X DELETE http://localhost:8083/connectors/local-file-sink

8. Transforming Data

8.1. Supported Transformations

Transformations enable us to make simple and lightweight modifications to individual messages.

Kafka Connect supports the following built-in transformations:

  • InsertField – Add a field using either static data or record metadata
  • ReplaceField – Filter or rename fields
  • MaskField – Replace a field with the valid null value for the type (zero or an empty string, for example)
  • HoistField – Wrap the entire event as a single field inside a struct or a map
  • ExtractField – Extract a specific field from struct and map and include only this field in the results
  • SetSchemaMetadata – Modify the schema name or version
  • TimestampRouter – Modify the topic of a record based on original topic and timestamp
  • RegexRouter – Modify the topic of a record based on original topic, a replacement string, and a regular expression

A transformation is configured using the following parameters:

  • transforms – A comma-separated list of aliases for the transformations
  • transforms.$alias.type – Class name for the transformation
  • transforms.$alias.$transformationSpecificConfig – Configuration for the respective transformation
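
As a sketch, in a connector properties file these parameters take the following shape (the aliases and values mirror the example we'll build in the next section):

transforms=MakeMap,InsertSource
transforms.MakeMap.type=org.apache.kafka.connect.transforms.HoistField$Value
transforms.MakeMap.field=line
transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.InsertSource.static.field=data_source
transforms.InsertSource.static.value=test-file-source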

8.2. Applying a Transformer

To test some transformation features, let's set up the following two transformations:

  • First, let's wrap the entire message as a JSON struct
  • After that, let's add a field to that struct

Before applying our transformations, we have to configure Connect to use schemaless JSON, by modifying the connect-distributed.properties:

key.converter.schemas.enable=false
value.converter.schemas.enable=false

After that, we have to restart Connect, again in distributed mode:

$CONFLUENT_HOME/bin/connect-distributed $CONFLUENT_HOME/etc/kafka/connect-distributed.properties

Again, we need to create the body for the source connector POST as a JSON file. Here, we'll call it connect-file-source-transform.json.

Besides the already known parameters, we add a few lines for the two required transformations:

{ "name": "local-file-source", "config": { "connector.class": "FileStreamSource", "tasks.max": 1, "file": "test-transformation.txt", "topic": "connect-transformation", "transforms": "MakeMap,InsertSource", "transforms.MakeMap.type": "org.apache.kafka.connect.transforms.HoistField$Value", "transforms.MakeMap.field": "line", "transforms.InsertSource.type": "org.apache.kafka.connect.transforms.InsertField$Value", "transforms.InsertSource.static.field": "data_source", "transforms.InsertSource.static.value": "test-file-source" } }

After that, let's perform the POST:

curl -d @$CONFLUENT_HOME/connect-file-source-transform.json \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8083/connectors

Let's write some lines to our test-transformation.txt:

Foo
Bar

If we now inspect the connect-transformation topic, we should get the following lines:

{"line":"Foo","data_source":"test-file-source"} {"line":"Bar","data_source":"test-file-source"}

9. Using Ready Connectors

After using these simple connectors, let's have a look at more advanced ready-to-use connectors, and how to install them.

9.1. Where to Find Connectors

Pre-built connectors are available from different sources:

  • A few connectors are bundled with plain Apache Kafka (source and sink for files and console)
  • Some more connectors are bundled with Confluent Platform (ElasticSearch, HDFS, JDBC, and AWS S3)
  • Also check out Confluent Hub, which is kind of an app store for Kafka connectors. The number of offered connectors is growing continuously:
    • Confluent connectors (developed, tested, documented and are fully supported by Confluent)
    • Certified connectors (implemented by a 3rd party and certified by Confluent)
    • Community-developed and -supported connectors
  • Beyond that, Confluent also provides a Connectors Page, with some connectors which are also available at the Confluent Hub, but also with some more community connectors
  • And finally, there are also vendors, who provide connectors as part of their product. For example, Landoop provides a streaming library called Lenses, which also contains a set of ~25 open source connectors (many of them also cross-listed in other places)

9.2. Installing Connectors from Confluent Hub

The enterprise version of Confluent provides a script for installing Connectors and other components from Confluent Hub (the script is not included in the Open Source version). If we're using the enterprise version, we can install a connector using the following command:

$CONFLUENT_HOME/bin/confluent-hub install confluentinc/kafka-connect-mqtt:1.0.0-preview

9.3. Installing Connectors Manually

If we need a connector that isn't available on Confluent Hub, or if we have the Open Source version of Confluent, we can install the required connectors manually. For that, we have to download and unzip the connector, as well as move the included libs to the folder specified as plugin.path.

For each connector, the archive should contain two folders that are interesting for us:

  • The lib folder contains the connector jar, for example, kafka-connect-mqtt-1.0.0-preview.jar, as well as some more jars required by the connector
  • The etc folder holds one or more reference config files

We have to move the lib folder to $CONFLUENT_HOME/share/java, or whichever path we specified as plugin.path in connect-standalone.properties and connect-distributed.properties. In doing so, it might also make sense to rename the folder to something meaningful.
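
As a sketch, installing the MQTT connector mentioned above manually might look like this (the archive name and target folder are illustrative):

unzip kafka-connect-mqtt-1.0.0-preview.zip
mkdir $CONFLUENT_HOME/share/java/kafka-connect-mqtt
cp kafka-connect-mqtt-1.0.0-preview/lib/*.jar $CONFLUENT_HOME/share/java/kafka-connect-mqtt/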

We can use the config files from etc either by referencing them while starting in standalone mode, or we can just grab the properties and create a JSON file from them.
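
For example, to run such a connector in standalone mode, we could pass its reference properties file directly (the file path here is hypothetical):

$CONFLUENT_HOME/bin/connect-standalone \
  $CONFLUENT_HOME/etc/kafka/connect-standalone.properties \
  path/to/kafka-connect-mqtt/etc/mqtt-source.properties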

10. Conclusion

In this tutorial, we had a look at how to install and use Kafka Connect.

We looked at the types of connectors, both source and sink. We also reviewed some features and modes that Connect can run in. Then, we examined transformations. And finally, we learned where to get and how to install custom connectors.

As always, the configuration files can be found over on GitHub.