Follow these steps to enable and configure the Kafka plugin for Ranger.
Before you begin
The default policy user (ambari-qa) used for a plugin must be an existing, valid user on the system that is configured for Ranger.
Procedure
- From the Ambari web interface, select the Ranger service and then open the Configs tab. Select the Ranger Plugin tab.
- In the Ranger Plugin section, enable the Kafka Ranger Plugin, and then click Save.
Note
- The Kafka Ranger plugin requires Kerberos. You will see a warning if you try to enable the plugin for Kafka on a non-Kerberized cluster. For details, see the Kafka Plugin section of the Ranger FAQ.
- Topic creation can be authorized through Ranger, but only when the topic is auto-created by a producer or consumer. The recommended policy setup to authorize topic auto-creation for producers or consumers is as follows (see the sketch after this list):
- Create a policy where the resource is all topics, that is, *.
- For producers, create a policy item under this policy that grants both Produce and Configure permissions to the relevant users or groups.
- For consumers, create a policy item under this policy that grants both Consume and Configure permissions to the relevant users or groups.
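The policy can be created from the Ranger Admin web UI, or through Ranger's public REST API. The following curl sketch is illustrative only: the Ranger Admin host, the admin:admin credentials, the <cluster_name>_kafka service name, the policy name, and the placeholder users are all assumptions based on a default install, and the REST access-type names publish, consume, and configure correspond to the Produce, Consume, and Configure permissions described above.
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://<ranger-admin-host>:6080/service/public/v2/api/policy \
  -d '{
    "service": "<cluster_name>_kafka",
    "name": "kafka-topic-autocreate",
    "isEnabled": true,
    "resources": { "topic": { "values": ["*"] } },
    "policyItems": [
      { "users": ["<producer-user>"],
        "accesses": [ { "type": "publish",   "isAllowed": true },
                      { "type": "configure", "isAllowed": true } ] },
      { "users": ["<consumer-user>"],
        "accesses": [ { "type": "consume",   "isAllowed": true },
                      { "type": "configure", "isAllowed": true } ] }
    ] }'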
Example
The following is an example of how to use the Kafka Ranger plugin for authorization:
- Ensure that the default policy that was created when the plugin was enabled is itself enabled and has synced to the Kafka broker.
- Ensure that the Kerberos ticket has not expired; renew it by running the kinit command as the kafka user.
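For example, the following commands obtain and then verify a ticket; the keytab path and principal shown follow common defaults and may differ on your cluster:
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/$(hostname -f)
klist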
- Run the following command to create a topic in Kafka. Run the command as the kafka user and from the /usr/iop/current/kafka-broker/ directory:
bin/kafka-topics.sh --create --zookeeper hostname.fyre.ibm.com:2181 --replication-factor 1 --partitions 1 --topic test-topic
- Create files named producer.properties and consumer.properties, each containing the single line security.protocol=SASL_PLAINTEXT.
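For example, the following commands create both files in a directory of your choice, referred to as <path> in the commands that follow:
echo "security.protocol=SASL_PLAINTEXT" > <path>/producer.properties
echo "security.protocol=SASL_PLAINTEXT" > <path>/consumer.properties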
- Run the following command to start the producer. Run the command as the kafka user and from the /usr/iop/current/kafka-broker/ directory:
bin/kafka-console-producer.sh --broker-list <cluster url>:6667 --topic test-topic --producer.config <path>/producer.properties
- In another window, run the following command to start the consumer. Run the command as the root user and from the /usr/iop/current/kafka-broker/ directory:
bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server <cluster url>:6667 --consumer.config <path>/consumer.properties
- In the producer window, write some test messages and observe that they appear in the consumer window.
- Disable the policy and observe that error messages appear in both windows, indicating that the producer and consumer are no longer authorized to connect.
- Re-enable the policy and observe that messages can be sent and received properly again.
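Policies can be disabled and re-enabled from the Ranger Admin web UI. As an alternative, the following sketch toggles a policy through Ranger's public REST API; the host, the admin:admin credentials, and the policy ID 42 are hypothetical, and updated-policy.json is assumed to be the fetched policy JSON with "isEnabled" set to false (or back to true):
curl -u admin:admin http://<ranger-admin-host>:6080/service/public/v2/api/policy/42
curl -u admin:admin -H "Content-Type: application/json" -X PUT \
  http://<ranger-admin-host>:6080/service/public/v2/api/policy/42 \
  -d @updated-policy.json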