Limitation on dtab max number?

Hey guys!

Quick question, as I can't seem to find any info in the documentation.
I am using linkerd with namerd to set up my dtab dynamically; my namerd config is the following:

config.yaml: |-
  admin:
    ip: 0.0.0.0
    port: 9991
  storage:
    kind: io.l5d.k8s
    experimental: true
  namers:
  - kind: io.l5d.k8s
    host: 127.0.0.1
    port: 8001
  interfaces:
  - kind: io.l5d.thriftNameInterpreter
    ip: 0.0.0.0
    port: 4100
  - kind: io.l5d.httpController
    ip: 0.0.0.0
    port: 4180

However, whenever I reach more than ~14 routes in my dtab, the namerd API throws an HTTP 400 for a malformed dtab (even though it is not malformed).
I wanted to know if there is a way for me to increase the max number of dtab entries, maybe by changing my storage backend? Or is it a hardcoded limitation in linkerd/namerd? Or am I missing something obvious?!

Thanks a lot
Cheers !!

There should be no such limit. Many people use Linkerd with much larger dtabs. Can you post the dtab so that we can take a look?


Hey William! Thanks for getting back to me.
Our dtab is like this:

/srv =>/#/io.l5d.k8s/default/grpc ;
/grpc =>/srv ;
/svc =>/$/io.buoyant.http.domainToPathPfx/grpc ;
/svc/package.v11.packageBlock =>/svc/package-11 ;
/svc/package2.v12.package2Block =>/svc/package2-12 ;
/svc/packageblock.v14.packageBlock =>/svc/package-14 ;
/svc/package.v13.packageBlock =>/svc/package-13 ;
/svc/package.v14.packageBlock =>/svc/package-14 ;
/svc/package2.v10.package2Block =>/svc/package2-10 ;
/svc/package2.v11.package2Block =>/svc/package2-11 ;
/svc/package.v16.packageBlock =>/svc/package-16 ;
/svc/package2.v13.package2Block =>/svc/package2-13

As we are routing gRPC calls to specific services in Kubernetes, we add a new route for every new service :confused:, not sure this is the best solution.
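For context, since it is one override rule per service version, these rules are easy to generate rather than maintain by hand. Here is a minimal sketch in Python (the service tuples passed in are hypothetical, not our real list):

```python
# Minimal sketch: generate one dtab override rule per (package, version, block)
# tuple, matching the pattern used in the dtab above. Names are hypothetical.

def dtab_rules(services):
    """services: iterable of (package, version, block) tuples."""
    header = [
        "/srv  => /#/io.l5d.k8s/default/grpc;",
        "/grpc => /srv;",
        "/svc  => /$/io.buoyant.http.domainToPathPfx/grpc;",
    ]
    overrides = [
        f"/svc/{pkg}.v{ver}.{block} => /svc/{pkg}-{ver};"
        for pkg, ver, block in services
    ]
    return "\n".join(header + overrides)

print(dtab_rules([("package", 11, "packageBlock"),
                  ("package2", 12, "package2Block")]))
```

The generated text can then be pushed to namerd as the new dtab for the namespace.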

The Linkerd config is the following:

admin:
  port: 9990
  ip: 0.0.0.0

telemetry:
- kind: io.l5d.prometheus
- kind: io.l5d.recentRequests
  sampleRate: 0.25

usage:
  orgId: linkerd-examples-daemonset-namerd

routers:
- protocol: h2
  label: outgoing
  experimental: true
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: internal
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: l5d
  servers:
  - port: 4140
    ip: 0.0.0.0
  service:
    responseClassifier:
      kind: io.l5d.h2.grpc.neverRetryable

- protocol: h2
  label: incoming
  experimental: true
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: internal
    transformers:
    - kind: io.l5d.k8s.localnode
  servers:
  - port: 4141
    ip: 0.0.0.0
  service:
    responseClassifier:
      kind: io.l5d.h2.grpc.neverRetryable

- protocol: h2
  experimental: true
  label: external
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: external
  servers:
  - port: 4142
    ip: 0.0.0.0

- protocol: http
  label: http-outgoing
  dtab: |
    /srv        => /#/io.l5d.k8s/default/http;
    /host       => /srv;
    /svc        => /host;
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: http
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: http-incoming
      service: l5d
  servers:
  - port: 4240
    ip: 0.0.0.0

- protocol: http
  label: http-incoming
  dtab: |
    /srv        => /#/io.l5d.k8s/default/http;
    /host       => /srv;
    /svc        => /host;
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: http
    transformers:
    - kind: io.l5d.k8s.localnode
  servers:
  - port: 4241
    ip: 0.0.0.0
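As an aside, the three-rule dtab in the http routers above just chains prefix rewrites. Here is a toy model of that delegation in Python (real dtab resolution in Finagle also handles unions and alternation; this sketch only does prefix substitution, with later entries taking precedence):

```python
# Toy model of dtab delegation: resolve a /svc name through the
# three-rule dtab used by the http routers above.
DTAB = [
    ("/srv",  "/#/io.l5d.k8s/default/http"),
    ("/host", "/srv"),
    ("/svc",  "/host"),
]

def delegate(path):
    # Later dtab entries take precedence; apply the first matching
    # prefix (scanning bottom-up) and repeat until nothing matches.
    while True:
        for prefix, dst in reversed(DTAB):
            if path == prefix or path.startswith(prefix + "/"):
                path = dst + path[len(prefix):]
                break
        else:
            return path

print(delegate("/svc/hello"))  # /#/io.l5d.k8s/default/http/hello
```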

Linkerd version: 1.3.1 (running as a DaemonSet on k8s)
kubectl version: 1.4.0 (should update it)
Kubernetes version: 1.7.11
Namerd version: 0.9.1

If you need any other info, I'm happy to provide it! A big guess would be that I just need to update all my versions?!
Thanks a lot ! Cheers

Just to confirm, that dtab has 12 routes, and it works—is that right?

Can you also provide an example of a 14-line dtab that doesn’t work?

Indeed, that dtab works.

If you take this one:

/srv =>/#/io.l5d.k8s/default/grpc ;
/grpc =>/srv ;
/svc =>/$/io.buoyant.http.domainToPathPfx/grpc ;
/svc/package.v16.packageBlock =>/svc/package-16 ;
/svc/package2.v16.PackageServiceBlock =>/svc/package2-16 ;
/svc/package.v17.packageBlock =>/svc/package-17 ;
/svc/feature.v9.FeatureBlock =>/svc/feature-9 ;
/svc/oldfeature.v10.OldFeatureBlock =>/svc/oldfeature-10 ;
/svc/newfeature.v17.newfeatureBlock =>/svc/newfeature-17 ;
/svc/feature.v10.FeatureBlock =>/svc/feature-10 ;
/svc/package.v9.packageBlock =>/svc/package-9 ;
/svc/package.v10.packageBlock =>/svc/package-10 ;
/svc/package.v11.packageBlock =>/svc/package-11 ;
/svc/package.v12.packageBlock =>/svc/package-12 ;
/svc/package.v13.packageBlock =>/svc/package-13 ;
/svc/package.v14.packageBlock =>/svc/package-14;
/svc/package.v15.packageBlock =>/svc/package-15;

I can't update it; here is the curl output:

Host: localhost:4180
User-Agent: curl/7.54.0
Accept: */*
Content-Type: application/json
Content-Length: 1060
Expect: 100-continue

HTTP/1.1 100 Continue
HTTP/1.1 400 Bad Request
Content-Length: 0
HTTP error before end of send, stop sending

Closing connection 0

I am under the impression that the exact number of dtab entries can vary (here 16 works, 17 does not), but the constant factor is the size of the request: it fails as soon as I go over ~1000 bytes in Content-Length?!
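To double-check that it is the byte size rather than the rule count, here is a quick Python sketch that measures how the raw dtab payload grows with the number of rules (the rule template below is representative of ours, not the exact dtab):

```python
# Quick check: how the Content-Length of a raw dtab payload grows with
# the number of override rules. The rule template is representative.

HEADER = (
    "/srv =>/#/io.l5d.k8s/default/grpc ;\n"
    "/grpc =>/srv ;\n"
    "/svc =>/$/io.buoyant.http.domainToPathPfx/grpc ;\n"
)

def payload_size(n_rules):
    rules = "".join(
        f"/svc/package.v{i}.packageBlock =>/svc/package-{i} ;\n"
        for i in range(n_rules)
    )
    return len((HEADER + rules).encode("utf-8"))

for n in (12, 14, 17):
    print(n, "rules ->", payload_size(n), "bytes")
```

At roughly 50 bytes per rule, 17 rules is right around where the payload crosses the size I'm seeing fail, so the correlation fits.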
Cheers

If I understand correctly, we all agree that the number of dtab entries isn't the problem, but there might be another problem with large requests. If so, I suggest we close this thread and open a new one about the large request issue, so that the two things don't get confused with each other.

All right, well, I am not sure exactly what was wrong; as you were saying @briansmith_buoyant, it looked like large requests over 1000 bytes of JSON content. Anyway, this problem seems to be fixed by updating our kubectl and namerd versions to:
namerd: 1.3.1 (same as linkerd)
kubectl: 1.7.2

Thanks for your time.
Cheers

Very interesting! Out of curiosity, what namerd and kubectl versions were you on before?

Glad it’s working now anyways.

I was using these versions:
kubectl version: 1.4.0
Namerd version: 0.9.1

So probably some buggy code changed in between. Out of curiosity, do you guys know how many dtab entries linkerd/namerd can withstand? Are we talking hundreds, thousands, hundreds of thousands, or millions :slight_smile: ?

Cheers !

Interesting question! There’s only one way to find out… :slight_smile:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.