Concurrent consumption
```java
ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
```
Increasing the Kafka consumer's concurrency (the number of consumer threads) can raise throughput. Be careful, though: the topic's partition count may become the bottleneck. The number of workers in a consumer group should not exceed the number of partitions, otherwise the extra threads are wasted.
The slowest part of the system determines overall performance. If the processing that follows consumption is slow, threads may block for a while; in the worst case the session times out and the offsets get reset.
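A fuller sketch of the factory above, assuming a spring-kafka setup where `consumerConfig` is the consumer property map (the bean shape and the value 3 are illustrative, not from the original snippet):

```java
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            Map<String, Object> consumerConfig) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfig));
        // One consumer thread per container, capped by the topic's partition count;
        // with 3 partitions, more than 3 threads would sit idle.
        factory.setConcurrency(3);
        return factory;
    }
}
```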
Consume in batches
```java
ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
consumerConfig.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
```
The default max.poll.interval.ms is 300000 (5 minutes). With max.poll.records=50, every poll fetches at most fifty messages.
This setup may trigger the warning "Auto offset commit failed":

"Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records."
You can reduce max.poll.records, or increase session.timeout.ms. Note that heartbeat.interval.ms must be lower than session.timeout.ms, and is usually set to about one third of the timeout value.
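As an illustration, a consumer property map that applies these rules (the concrete values are assumptions, not recommendations from the original):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerTuning {

    static Map<String, Object> tunedConsumerConfig() {
        Map<String, Object> config = new HashMap<>();
        // Smaller batches, so each poll() loop finishes well inside max.poll.interval.ms.
        config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        // More headroom before the coordinator considers the consumer dead.
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // Must be lower than session.timeout.ms; roughly one third is the usual choice.
        config.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000);
        return config;
    }
}
```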
Alternatively, set enable.auto.commit to false and let spring-kafka's internal mechanism manage offset commits. If enable.auto.commit is true and processing finishes faster than auto.commit.interval.ms, the ack (commit) still waits for the next auto-commit cycle to come around.
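A minimal sketch of that manual-commit setup with a batch listener; the topic, factory, and class names are assumptions, and the exact AckMode location varies slightly across spring-kafka versions:

```java
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

public class DemoBatchListener {

    // Assumes the container factory was configured with:
    //   consumerConfig.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    //   factory.setBatchListener(true);
    //   factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    @KafkaListener(topics = "demo", containerFactory = "kafkaListenerContainerFactory")
    public void onMessages(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
        for (ConsumerRecord<String, String> record : records) {
            // process each record here
        }
        // Commit offsets for the whole batch only after processing succeeds.
        ack.acknowledge();
    }
}
```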
Increase partitions
```bash
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --partitions 30 --topic demo
```
Partitions can only be increased, never decreased. Increasing the number of workers in the consumer group in step with the partitions raises throughput.
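If you prefer doing this from code rather than the CLI, the Kafka AdminClient offers createPartitions; a sketch, assuming a broker at localhost:9092:

```java
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow demo to 30 partitions; Kafka refuses to shrink a topic.
            admin.createPartitions(Map.of("demo", NewPartitions.increaseTo(30))).all().get();
        }
    }
}
```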
If the Kafka cluster runs on three machines, the cluster has 3 brokers; create the demo topic with 3 partitions.
For high availability, give every partition a replication factor of 2; each broker then holds two Kafka log directories.
If one machine dies, the other two keep working. With a replication factor of 3, the topic still works even if two machines die.
With replication-factor 2:
brokerA: partition0 / partition1
brokerB: partition1 / partition2
brokerC: partition2 / partition0

With replication-factor 3:
brokerA: partition0 / partition1 / partition2
brokerB: partition1 / partition2 / partition0
brokerC: partition2 / partition0 / partition1
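For instance, the demo topic from the replication-factor-2 layout above could be created with the AdminClient like this (a sketch; the bootstrap address is an assumption):

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateDemoTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions spread over 3 brokers, each partition with 2 replicas.
            admin.createTopics(List.of(new NewTopic("demo", 3, (short) 2))).all().get();
        }
    }
}
```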
Modify replication-factor
https://blog.csdn.net/russle/article/details/83421904
Optimize consumer
https://docs.spring.io/spring-kafka/reference/html/_reference.html
https://blog.csdn.net/zwgdft/article/details/54633105
Warning:
https://www.jianshu.com/p/4e00dff97f39
https://blog.csdn.net/zwx19921215/article/details/83269445