## Summary
When a Function App has many Kafka triggers (e.g., 31 triggers), each trigger creates its own Confluent.Kafka Consumer instance. Since librdkafka internally creates ~25 threads per Consumer, this results in ~775 threads, causing significant resource overhead.
This issue proposes implementing a Consumer Sharing pattern where triggers with matching configurations share a single underlying Consumer.
## Detail Report (Internal Only)

### Problem
| Current Behavior | Impact |
| --- | --- |
| 1 Trigger = 1 Consumer | ~25 threads per trigger |
| 31 Triggers | ~775 threads total |
| High memory usage | Resource exhaustion risk |
| Many broker connections | Connection overhead |
### Root Cause

This is a fundamental aspect of librdkafka's architecture: each Consumer/Producer instance creates a fixed set of internal threads (a main thread, per-broker threads, etc.). This cannot be changed at the Confluent.Kafka level.
### Existing TODO in Code

```csharp
// KafkaTriggerAttributeBindingProvider.cs:71
Task<IListener> listenerCreator(ListenerFactoryContext factoryContext, bool singleDispatch)
{
    // TODO: reuse connections if they match with others in same function app
    var listener = new KafkaListener<TKey, TValue>(...);
    // ...
}
```
## Proposed Solution

Implement Consumer Sharing at the Extension level:

### Before (Current)

```
31 Triggers → 31 Consumers → 31 × 25 = ~775 threads
```

### After (Consumer Sharing)

```
31 Triggers (same config) → 1 Shared Consumer → 1 × 25 = ~25 threads
```
### Key Components
- ConsumerSharingKey: Determines when consumers can be shared (broker, consumerGroup, security settings)
- IKafkaConsumerPool: Manages shared consumer instances
- SharedConsumer: Wrapper that dispatches messages to multiple listeners
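The components above could look roughly like the following. This is a hypothetical sketch, not the final API: the type shapes, the fields chosen for the key, and the `FromConfig`/`GetOrCreate` helpers are assumptions; only the names `ConsumerSharingKey`, `IKafkaConsumerPool`, and `SharedConsumer` come from this proposal.

```csharp
using System;
using Confluent.Kafka;

// Value-equality record: two triggers share a consumer only when every
// connection-relevant setting matches (broker, group, security settings).
internal sealed record ConsumerSharingKey(
    string BrokerList,
    string ConsumerGroup,
    string SecurityProtocol,
    string SaslMechanism)
{
    public static ConsumerSharingKey FromConfig(ConsumerConfig config) =>
        new(config.BootstrapServers,
            config.GroupId,
            config.SecurityProtocol?.ToString() ?? string.Empty,
            config.SaslMechanism?.ToString() ?? string.Empty);
}

internal interface IKafkaConsumerPool
{
    // Returns the existing SharedConsumer for a matching key,
    // or creates one via the factory on first use.
    SharedConsumer GetOrCreate(ConsumerSharingKey key, Func<SharedConsumer> factory);

    // Decrements the key's reference count; disposes the underlying
    // consumer when the last listener releases it.
    void Release(ConsumerSharingKey key);
}

// Wrapper that fans out messages from one underlying consumer to the
// listeners registered for each trigger (sketch only).
internal sealed class SharedConsumer { /* ... */ }
```

Because `ConsumerSharingKey` is a record, two keys built from identical settings compare equal, which is what lets a dictionary-backed pool deduplicate consumers.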
### Affected Files

| File | Change Type |
| --- | --- |
| KafkaTriggerAttributeBindingProvider.cs | Major - Use consumer pool |
| KafkaListener.cs | Major - Use shared consumer |
| FunctionExecutorBase.cs | Medium - Commit strategy |
| KafkaWebJobsStartup.cs | Minor - DI registration |
## Considerations

- Offset Commit: Need per-handler offset tracking to avoid conflicts
- Error Isolation: Errors in one handler shouldn't affect others
- Backward Compatibility: Should be opt-in initially (`EnableConsumerSharing = false` by default)
- Lifecycle: Reference counting for consumer lifecycle management
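The lifecycle consideration can be sketched as a small reference-counted wrapper. This is a minimal illustration under stated assumptions: `PooledConsumer` is a hypothetical name, and the real pool would also have to remove the entry from its dictionary when `Release` returns true.

```csharp
using System;
using System.Threading;

// Hypothetical wrapper: the underlying Confluent.Kafka consumer is disposed
// only when the last listener sharing it releases its reference.
internal sealed class PooledConsumer
{
    private readonly IDisposable _consumer; // the shared underlying consumer
    private int _refCount;

    public PooledConsumer(IDisposable consumer) => _consumer = consumer;

    // Called when a listener starts using this consumer.
    public void AddRef() => Interlocked.Increment(ref _refCount);

    // Called when a listener stops; returns true when the underlying
    // consumer was actually disposed (ref count reached zero).
    public bool Release()
    {
        if (Interlocked.Decrement(ref _refCount) > 0)
        {
            return false;
        }
        _consumer.Dispose();
        return true;
    }
}
```

`Interlocked` keeps the count correct when listeners on different threads stop concurrently, which matters because Functions can scale listeners independently.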
## Expected Impact

- Thread reduction: ~97% (775 → ~25 for 31 triggers with matching configuration)
- Memory reduction: significant, since N librdkafka instances (each with its own threads and buffers) collapse into one per shared configuration
- Connection reduction: from N sets of broker connections (one per consumer) to 1 per shared configuration
## Specification Document

A detailed specification is available at `docs/specs/SPEC-consumer-sharing-pattern.md`.
## Related