├── .gitignore ├── A1-http-connect-proxy-support.md ├── A10-avoid-grpclb-and-service-config-for-localhost-and-ip-literals.md ├── A14-channelz.md ├── A14_graphics ├── 1.png ├── 2.png └── 3.png ├── A15-promote-reflection.md ├── A16-binary-logging.md ├── A17-client-side-health-checking.md ├── A18-tcp-user-timeout.md ├── A2-service-configs-in-dns.md ├── A21-service-config-error-handling.md ├── A24-lb-policy-config.md ├── A26-grpclb-selection.md ├── A27-xds-global-load-balancing.md ├── A27_graphics ├── grpc_client_architecture.png └── grpc_client_architecture.svg ├── A28-xds-traffic-splitting-and-routing.md ├── A28_graphics └── grpc_xds_client_architecture.png ├── A29-xds-tls-security.md ├── A3-channel-tracing.md ├── A30-xds-v3.md ├── A31-xds-timeout-support-and-config-selector.md ├── A31_graphics ├── xds_architecture.png └── xds_architecture.svg ├── A32-xds-circuit-breaking.md ├── A33-Fault-Injection.md ├── A36-xds-for-servers.md ├── A37-xds-aggregate-and-logical-dns-clusters.md ├── A37_graphics ├── grpc_xds_client_architecture.png └── grpc_xds_client_architecture.svg ├── A38-admin-interface-api.md ├── A39-xds-http-filters.md ├── A40-csds-support.md ├── A41-xds-rbac.md ├── A42-xds-ring-hash-lb-policy.md ├── A42_graphics ├── grpc_xds_client_architecture_ring_hash.png ├── grpc_xds_client_architecture_ring_hash.svg ├── grpc_xds_client_architecture_round_robin.png └── grpc_xds_client_architecture_round_robin.svg ├── A43-grpc-authorization-api.md ├── A44-xds-retry.md ├── A44_graphics └── grpc_xds_retry_workflow.png ├── A45-retry-stats.md ├── A46-xds-nack-semantics-improvement.md ├── A47-xds-federation.md ├── A48-xds-least-request-lb-policy.md ├── A5-grpclb-in-dns.md ├── A50-xds-outlier-detection.md ├── A50_graphics ├── grpc_xds_client_architecture.png └── grpc_xds_client_architecture.svg ├── A51-custom-backend-metrics.md ├── A52-xds-custom-lb-policies.md ├── A52_graphics ├── grpc_xds_client_architecture.png └── grpc_xds_client_architecture.svg ├── 
A53-xds-ignore-resource-deletion.md ├── A54-restrict-control-plane-status-codes.md ├── A55-xds-stateful-session-affinity.md ├── A55_graphics ├── lb-hierarchy.png └── lb-hierarchy.svg ├── A56-priority-lb-policy.md ├── A57-xds-client-failure-mode-behavior.md ├── A58-client-side-weighted-round-robin-lb-policy.md ├── A59-audit-logging.md ├── A6-client-retries.md ├── A60-xds-stateful-session-affinity-weighted-clusters.md ├── A60_graphics ├── broken-affinity.png ├── broken-affinity.svg ├── final-diagram.png ├── race-condition.png └── race-condition.svg ├── A61-IPv4-IPv6-dualstack-backends.md ├── A62-pick-first.md ├── A63-xds-string-matcher-in-header-matching.md ├── A64-lrs-custom-metrics.md ├── A65-xds-mtls-creds-in-bootstrap.md ├── A66-otel-stats.md ├── A68-random-subsetting.md ├── A68_graphics ├── subsetting100-10-5.png ├── subsetting100-100-25.png ├── subsetting100-100-5.png ├── subsetting2000-10-5.png └── subsetting500-10-5.png ├── A69-crl-enhancements.md ├── A69_graphics ├── CrlApiCases.svg ├── CrlErrorScenarios.svg ├── CrlProviderDiagrams.svg ├── README.md ├── basic_diagram.png ├── basic_table.png ├── golang_reloader.png └── reloader_table.png ├── A6_graphics ├── StateDiagram.png ├── StateDiagram.svg ├── WhereRPCsFail.png ├── WhereRPCsFail.svg ├── WhereRetriesOccur.png ├── WhereRetriesOccur.svg ├── basic_hedge.png ├── basic_hedge.svg ├── basic_retry.png ├── basic_retry.svg ├── too_many_attempts.png ├── too_many_attempts.svg ├── transparent.png └── transparent.svg ├── A71-xds-fallback.md ├── A72-open-telemetry-tracing.md ├── A74-xds-config-tears.md ├── A74_graphics ├── grpc_client_architecture.png └── grpc_client_architecture.svg ├── A75-xds-aggregate-cluster-behavior-fixes.md ├── A75_graphics ├── grpc_client_architecture_aggregate.png ├── grpc_client_architecture_aggregate.svg ├── grpc_client_architecture_non_aggregate.png └── grpc_client_architecture_non_aggregate.svg ├── A76-ring-hash-improvements.md ├── A78-grpc-metrics-wrr-pf-xds.md ├── 
A79-non-per-call-metrics-architecture.md ├── A79_graphics ├── global-instruments-registry.png ├── global-stats-plugin-registry-usage.png └── stats-plugin-scoping.png ├── A8-client-side-keepalive.md ├── A81-xds-authority-rewriting.md ├── A82-xds-system-root-certs.md ├── A83-xds-gcp-authn-filter.md ├── A85-lrs-custom-metrics-changes.md ├── A86-xds-http-connect.md ├── A87-mtls-spiffe-support.md ├── A88-xds-data-error-handling.md ├── A89-backend-service-metric-label.md ├── A9-server-side-conn-mgt.md ├── A90-health-service-list-method.md ├── CODE-OF-CONDUCT.md ├── G1-true-binary-metadata.md ├── G2-http3-protocol.md ├── GOVERNANCE.md ├── GRFC-TEMPLATE.md ├── L1-cpp-stream-coalescing.md ├── L100-core-narrow-call-details.md ├── L101-core-remove-grpc_register_plugin.md ├── L102-cpp-version-macros.md ├── L103-core-move-insecure-creds-declaration.md ├── L104-core-ban-recv-with-send-status.md ├── L105-python-expose-new-error-types.md ├── L106-node-heath-check-library.md ├── L107-node-noop-start.md ├── L108-node-grpc-reflection-library.md ├── L109-node-server-unbind.md ├── L11-ruby-interceptors.md ├── L110-csharp-nullable-reference-types.md ├── L111-node-server-drain.md ├── L112-node-server-interceptors.md ├── L113-core-remove-num-external-connectivity-watchers.md ├── L114-node-server-connection-injection.md ├── L115-core-refactor-generic-service-stub.md ├── L116-core-loosen-max-pings-without-data.md ├── L117-core-replace-gpr-logging-with-abseil-logging.md ├── L118-core-remove-cronet.md ├── L119-python-add-typing-to-sync-api.md ├── L12-csharp-interceptors.md ├── L120-requiring-cpp17.md ├── L121-removing-core-cpp-public-hdrs ├── L122-core-remove-gpr_atm_no_barrier_clamped_add.md ├── L13-python-interceptors.md ├── L15-php-interceptors.md ├── L17-cpp-sync-server-exceptions.md ├── L18-core-remove-grpc-alarm.md ├── L2-cpp-completion-queue-creation-api.md ├── L21-core-gpr-review.md ├── L22-cpp-change-grpcpp-dir-name.md ├── L23-node-protobufjs-library.md ├── L24-cpp-extensible-api.md 
├── L25-cpp-expose-buffer-reader-writer.md ├── L26-cpp-raw-codegen-api.md ├── L29-cpp-opencensus-filter.md ├── L30-cpp-control-max-threads-in-SyncServer.md ├── L31-php-intercetor-api-change.md ├── L32-node-channel-API.md ├── L33-node-checkServerIdentity-callback.md ├── L34-cpp-opencensus-span-api.md ├── L35-node-getAuthContext.md ├── L38-objc-api-upgrade.md ├── L38_graphics └── class_diagram.png ├── L39-core-remove-grpc-use-signal.md ├── L40-node-call-invocation-transformer.md ├── L41-node-server-async-bind.md ├── L42-python-metadata-flags.md ├── L43-node-type-info.md ├── L44-python-rich-status.md ├── L45-cpp-server-load-reporting.md ├── L46-python-compression-api.md ├── L48-node-metadata-options.md ├── L49-objc-flow-control.md ├── L5-node-client-interceptors.md ├── L50-objc-interceptor.md ├── L50_graphics └── overview.png ├── L51-java-rm-nano-proto.md ├── L52-core-static-method-host.md ├── L54-python-server-wait.md ├── L55-objc-global-interceptor.md ├── L55_graphics └── global-interceptor-chain.png ├── L56-objc-bazel-support.md ├── L56_graphics ├── dependency.png ├── hierarchy1.png └── hierarchy2.png ├── L57-csharp-new-major-version.md ├── L58-python-async-api.md ├── L59-core-allow-cppstdlib.md ├── L6-core-allow-cpp.md ├── L60-core-remove-custom-allocator.md ├── L62-core-call-credential-security-level.md ├── L63-core-call-credentials-debug-string.md ├── L63_graphics ├── c_call_creds_hierarchy.png ├── call_creds_class_hierarchy.png └── plugin_creds_codeflow.png ├── L64-python-runtime-proto-parsing.md ├── L65-python-package-name.md ├── L66-core-cancellation-status.md ├── L67-cpp-callback-api.md ├── L68-core-callback-api.md ├── L7-go-metadata-api.md ├── L70-node-proto-loader-type-generator.md ├── L72-core-google_default_credentials-extension.md ├── L73-java-binderchannel.md ├── L73-java-binderchannel └── wireformat.md ├── L74-java-channel-creds.md ├── L75-core-remove-grpc-channel-ping.md ├── L77-core-cpp-third-party-identity-support-for-call-credentials.md ├── 
L78-python-rich-server-context.md ├── L79-cpp-byte-buffer-slice-methods.md ├── L8-cpp-internalization.md ├── L80-cpp-async-response-reader-destruction.md ├── L84-cpp-call-failed-before-recv-message.md ├── L86-aspect-based-python-bazel-rules.md ├── L88-cpp-absl-status-conversions.md ├── L89-core-remove-grpc-insecure-channel-creation-api.md ├── L9-go-resolver-balancer-API.md ├── L91-improved-directory-support-for-python-bazel-rules.md ├── L92-dotnet-grpc-web.md ├── L93-node-securecontext-creds.md ├── L94-core-eliminate-slice-interning.md ├── L95-python-reflection-client.md ├── L96-csharp-load-balancing.md ├── L96_graphics └── diagram.png ├── L98-requiring-cpp14.md ├── L99-core-eliminate-corking.md ├── L9_graphics ├── bar_after.png └── bar_before.png ├── LICENSE ├── P1-cloud-native.md ├── P3-grfcs-for-core-api-changes.md ├── P4-grpc-cve-process.md ├── P5-jdk-version-support.md ├── P6-grpc-io-announce.md └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | *.DS_Store 2 | -------------------------------------------------------------------------------- /A10-avoid-grpclb-and-service-config-for-localhost-and-ip-literals.md: -------------------------------------------------------------------------------- 1 | Special case localhost and ip literals for service-config and grpclb-in-DNS 2 | ---- 3 | * Author(s): apolcyn 4 | * Approver: markdroth 5 | * Status: Draft 6 | * Implemented in: none currently, suited for all languages 7 | * Last updated: 2018-10-29 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/Ln6zeUDoo3M 9 | 10 | ## Abstract 11 | 12 | This document discusses a problem and solution to bring down channel 13 | setup latency for grpc clients that implement 14 | [grpclb-in-DNS](https://github.com/grpc/proposal/blob/master/A5-grpclb-in-dns.md) 15 | and 16 | [service-configs-in-DNS](https://github.com/grpc/proposal/blob/master/A2-service-configs-in-dns.md), 17 | in a special and degenerate 
case. 18 | 19 | ## Background 20 | 21 | Clients that support grpclb-in-DNS and service-configs-in-DNS must 22 | query for TXT and SRV records of `_grpclb._tcp.<target>` and 23 | `_grpc_config.<target>`, along with the A/AAAA records of 24 | `<target>`, when creating a channel to `<target>`. In the case 25 | that `<target>` exists in the machine's "hosts" file and 26 | grpclb-in-DNS/service-configs-in-DNS are not set up for `<target>` (common 27 | when the target is "localhost" or an ipv4/v6 literal), 28 | channel setup can take much longer than desired due to the need 29 | to make these TXT and SRV lookups over the network and then 30 | wait for NXDOMAIN responses. This is particularly problematic 31 | for automated tests, which commonly create large numbers of client 32 | channels having targets of "localhost". 33 | 34 | While clients which don't implement grpclb-in-DNS or 35 | service-configs-in-DNS potentially only need to do a "hosts" file 36 | lookup when the target is "localhost", clients which do implement 37 | them add `lookup -> NXDOMAIN` RTT to the setup of every channel. 38 | 39 | Another problem which is unrelated to latency but which is also caused 40 | by SRV and TXT record lookups is related to the way in which 41 | Java treats DNS resolution failures. Internally, gRPC-Java raises a type 42 | of fatal error when a DNS query fails. Normally, gRPC-Java internally 43 | catches these types of errors and carries on, but there are certain 44 | automated test environments in which the Java runtime is adjusted to 45 | crash upon seeing these types of errors. Normally, A/AAAA queries 46 | always succeed in these types of test environments because it's 47 | unexpected for the client to target anything other than "localhost", but 48 | SRV and TXT record lookups change things so that clients in these 49 | test environments almost always see failed DNS queries, and so they 50 | present a problem.
51 | 52 | ## Proposal 53 | 54 | This gRFC proposes that gRPC clients special-case "localhost" and 55 | ipv4/v6 literals (e.g. "1.2.3.4" and "::1"). That is, this gRFC 56 | proposes that gRPC clients should not query for SRV or TXT 57 | records when the target is "localhost" or an ipv4/v6 literal, 58 | and thus that gRPC clients should not implement grpclb-in-DNS or 59 | service-configs-in-DNS for such targets. 60 | 61 | ## Rationale 62 | 63 | The latency problem is likely to be noticeable 64 | only for "localhost" and ipv4/ipv6 literal targets, because grpc clients 65 | connecting to targets which are neither in a machine's "hosts" file nor parseable as 66 | an ip literal need to reach out to DNS servers and wait for A/AAAA lookup time anyway 67 | (unless the grpc client is using a resolver that does client-side caching). This 68 | problem could perhaps be fixed by implementing DNS caches within grpc clients, but 69 | that would be more complex. 70 | 71 | The problem related to Java error handling in certain environments only practically 72 | affects gRPC-Java clients which target "localhost" or ip literals, and so this 73 | problem only needs to be dealt with for "localhost" and ip literals. 74 | 75 | Also note that while special-casing "localhost" seems wrong in principle 76 | because its special status is really a convention rather than an 77 | inherent property of the DNS protocol, the convention here is so strong 78 | that anyone violating it is likely to have many other problems, and we 79 | therefore think it's reasonable to have our code treat it specially. 80 | 81 | ## Implementation 82 | 83 | This should be done for all languages which implement 84 | grpclb-in-DNS and/or service-configs-in-DNS.
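As a non-normative illustration of the proposed special-casing, the following Python sketch shows how a resolver might decide to skip the SRV/TXT queries; the helper name and the port-stripping details are hypothetical, not part of any gRPC implementation:

```python
import ipaddress


def should_skip_srv_txt_lookups(target: str) -> bool:
    """Return True if the resolver should skip SRV/TXT queries.

    Per this gRFC, "localhost" and ipv4/v6 literals are special-cased so
    that grpclb-in-DNS and service-configs-in-DNS are not attempted.
    """
    host = target
    # Strip a port if present ("host:port" or "[v6literal]:port").
    if host.startswith("["):  # bracketed IPv6 literal, e.g. "[::1]:50051"
        host = host[1:host.index("]")]
    elif host.count(":") == 1:  # exactly one colon: "host:port"
        host = host.split(":")[0]
    if host == "localhost":
        return True
    try:
        ipaddress.ip_address(host)  # accepts "1.2.3.4" and "::1"
        return True
    except ValueError:
        return False
```

Targets such as `"localhost:50051"`, `"1.2.3.4"`, and `"[::1]:443"` would skip the extra lookups, while ordinary hostnames would not.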
85 | -------------------------------------------------------------------------------- /A14_graphics/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A14_graphics/1.png -------------------------------------------------------------------------------- /A14_graphics/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A14_graphics/2.png -------------------------------------------------------------------------------- /A14_graphics/3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A14_graphics/3.png -------------------------------------------------------------------------------- /A15-promote-reflection.md: -------------------------------------------------------------------------------- 1 | Promote Reflection 2 | ---- 3 | * Author(s): Carl Mastrangelo (carl-mastrangelo) 4 | * Approver: a11r 5 | * Status: Approved 6 | * Implemented in: 7 | * Last updated: 2018-06-13 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/AOgFISlAgxk 9 | 10 | ## Abstract 11 | 12 | Promote the Reflection Service from v1alpha to v1 13 | 14 | ## Background 15 | 16 | Reflection is a means by which a server can describe what messages it 17 | supports. Both the protocol and the [description]( 18 | https://github.com/grpc/grpc/blob/master/doc/server-reflection.md) of the 19 | reflection service have not changed for a long time. 
20 | 21 | ## Proposal 22 | 23 | It is proposed that the package name (and corresponding directory structure) 24 | for [reflection.proto]( 25 | https://github.com/grpc/grpc/blob/v1.12.x/src/proto/grpc/reflection/v1alpha/reflection.proto) 26 | be changed from `grpc.reflection.v1alpha` to `grpc.reflection.v1`. The C++ 27 | reflection implementation will be copied with the new package path, while the 28 | old one will be deprecated. C# will also be copied with the new package path, 29 | deprecating the old version. Java and Go implementations of the gRPC 30 | reflection service should also be updated to match. Additionally, the 31 | canonical proto definition should be created in the 32 | [grpc-proto](https://github.com/grpc/grpc-proto) repository to serve as a source 33 | of truth. 34 | 35 | To facilitate the package name change, the new location and package of the 36 | proto will be created. This will involve copying the existing proto file to 37 | the new destination. All clients will be adapted to prefer the new service 38 | name. All servers will dual support both services for a release. Lastly, the 39 | old service will be deprecated and marked for removal in the near future. 40 | 41 | As of this writing, the only known users of refle 42 | 43 | ## Rationale 44 | 45 | It is unlikely that the reflection proto service will change in 46 | backwards-incompatible ways. The service has been implemented in each language 47 | implementation of gRPC and has not changed in two years. 48 | 49 | ## Implementation 50 | 51 | 1. Copy grpc/reflection/v1alpha/reflection.proto to 52 | grpc/reflection/v1/reflection.proto and update package name. 53 | 2. Update existing service implementations in each repo to support the new 54 | location in addition to the old location. 55 | 3. The old reflection.proto will be marked deprecated and for removal. 56 | 4. Clients (such as grpc_cli) will be updated to dual request reflection 57 | information, preferring the new location first. 58 | 5. 
In an upcoming release (e.g. v1.15.x), the new proto will be announced 59 | and the old proto will be declared for removal. 60 | 6. In the subsequent release (e.g. 1.16.x), the old implementations will be 61 | removed, clients updated to not dual request, and the old proto removed. 62 | 63 | -------------------------------------------------------------------------------- /A27_graphics/grpc_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A27_graphics/grpc_client_architecture.png -------------------------------------------------------------------------------- /A28_graphics/grpc_xds_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A28_graphics/grpc_xds_client_architecture.png -------------------------------------------------------------------------------- /A31_graphics/xds_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A31_graphics/xds_architecture.png -------------------------------------------------------------------------------- /A37_graphics/grpc_xds_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A37_graphics/grpc_xds_client_architecture.png -------------------------------------------------------------------------------- /A42_graphics/grpc_xds_client_architecture_ring_hash.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A42_graphics/grpc_xds_client_architecture_ring_hash.png -------------------------------------------------------------------------------- /A42_graphics/grpc_xds_client_architecture_round_robin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A42_graphics/grpc_xds_client_architecture_round_robin.png -------------------------------------------------------------------------------- /A44_graphics/grpc_xds_retry_workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A44_graphics/grpc_xds_retry_workflow.png -------------------------------------------------------------------------------- /A50_graphics/grpc_xds_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A50_graphics/grpc_xds_client_architecture.png -------------------------------------------------------------------------------- /A52_graphics/grpc_xds_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A52_graphics/grpc_xds_client_architecture.png -------------------------------------------------------------------------------- /A55_graphics/lb-hierarchy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A55_graphics/lb-hierarchy.png -------------------------------------------------------------------------------- /A60_graphics/broken-affinity.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A60_graphics/broken-affinity.png -------------------------------------------------------------------------------- /A60_graphics/final-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A60_graphics/final-diagram.png -------------------------------------------------------------------------------- /A60_graphics/race-condition.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A60_graphics/race-condition.png -------------------------------------------------------------------------------- /A63-xds-string-matcher-in-header-matching.md: -------------------------------------------------------------------------------- 1 | A63: xDS StringMatcher in Header Matching 2 | ---- 3 | * Author(s): @markdroth 4 | * Approver: @ejona86 5 | * Status: {Draft, In Review, Ready for Implementation, Implemented} 6 | * Implemented in: 7 | * Last updated: 2023-05-03 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/dJFBiMzs6C0 9 | 10 | ## Abstract 11 | 12 | gRPC will add support for the `StringMatcher` field in xDS header matching. 13 | 14 | ## Background 15 | 16 | gRPC introduced support for xDS routing in [gRFC A28][A28]. Since that 17 | feature was implemented, however, a new `StringMatcher` field was added 18 | to the xDS `HeaderMatcher` proto in 19 | https://github.com/envoyproxy/envoy/pull/17119. This provides a more 20 | general-purpose matching API and adds the ability to make matches in a 21 | case-insensitive way. 22 | 23 | This proposal updates gRPC to support this new field. 
24 | 25 | ### Related Proposals: 26 | * [gRFC A28: xDS Traffic Splitting and Routing][A28] 27 | * [gRFC A41: xDS RBAC Support][A41] 28 | 29 | ## Proposal 30 | 31 | gRPC will support the [`HeaderMatcher.string_match` field][new_xds_field]. 32 | Note that this field is part of a `oneof`, so it is an alternative to 33 | the existing fields that gRPC already supports. 34 | 35 | The new field provides a superset of the functionality of the existing 36 | fields `exact_match`, `safe_regex_match`, `prefix_match`, `suffix_match`, 37 | and `contains_match`. Those fields are marked as deprecated in the 38 | xDS proto. However, those fields are still commonly used, so gRPC will 39 | continue to support them for the foreseeable future. 40 | 41 | The new field provides one additional feature over the old fields, which 42 | is the ability to ignore case in matches via the [`ignore_case` 43 | field](https://github.com/envoyproxy/envoy/blob/3fe4b8d335fa339ef6f17325c8d31f87ade7bb1a/api/envoy/type/matcher/v3/string.proto#L69). 44 | Note that this option is ignored for regex matches. 45 | 46 | Note that gRPC has existing code to support the `StringMatcher` proto as 47 | part of supporting RBAC, as specified in [gRFC A41][A41]. If possible, 48 | gRPC implementations should provide common code for evaluating header 49 | matches that can be shared between the two features. 50 | 51 | ### Temporary environment variable protection 52 | 53 | No environment variable protection is proposed for this feature, since 54 | it's a simple extension of existing header matching functionality. Unit 55 | test coverage should be sufficient to ensure that the new fields are 56 | handled correctly. 57 | 58 | ## Rationale 59 | 60 | N/A 61 | 62 | ## Implementation 63 | 64 | Implemented in C-core in https://github.com/grpc/grpc/pull/32993. 
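For illustration only (this example does not appear in the xDS API docs), a route's header match using the new field to match a header value case-insensitively might look like the following textproto fragment:

```textproto
// Hypothetical HeaderMatcher using the new string_match field.
headers {
  name: "env"
  string_match {
    exact: "staging"
    ignore_case: true  # matches "staging", "Staging", "STAGING", ...
  }
}
```

The same `string_match` message also supports `prefix`, `suffix`, `contains`, and `safe_regex` alternatives, mirroring the deprecated per-kind fields; as noted above, `ignore_case` has no effect for regex matches.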
65 | 66 | ## Open issues (if applicable) 67 | 68 | N/A 69 | 70 | [A28]: A28-xds-traffic-splitting-and-routing.md 71 | [A41]: A41-xds-rbac.md 72 | [new_xds_field]: https://github.com/envoyproxy/envoy/blob/3fe4b8d335fa339ef6f17325c8d31f87ade7bb1a/api/envoy/config/route/v3/route_components.proto#L2280 73 | -------------------------------------------------------------------------------- /A64-lrs-custom-metrics.md: -------------------------------------------------------------------------------- 1 | A64: xDS LRS Custom Metrics Support 2 | ---- 3 | * Author: yousukseung 4 | * Approver(s): markdroth 5 | * Status: Implemented 6 | * Implemented in: C-core, Java, Go 7 | * Last updated: 2023-05-10 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/Cs7ffkO1wUA 9 | * Updated by: [A85: Changes to xDS LRS Custom Metrics Support](A85-lrs-custom-metrics-changes.md) 10 | 11 | ## Abstract 12 | 13 | This proposal describes how the gRPC xDS client will support custom metrics from the backends in load reports to the LRS server at the locality level. 14 | 15 | ## Background 16 | 17 | The initial xDS functionality in the gRPC client described in [gRFC A27][A27] includes the ability to report load to the control plane via LRS. However, those load reports currently contain only basic request counts tracked by the client. [gRFC A51][A51] added support for backends to report custom backend metrics to the client via the [ORCA protocol][ORCA]. This proposal adds the ability for the client to propagate those backend metrics to the control plane in the LRS load reports. 18 | 19 | ### Related Proposals: 20 | * [A27: xDS-Based Global Load Balancing][A27] 21 | * [A51: Custom Backend Metrics Support][A51] 22 | 23 | ## Proposal 24 | 25 | ### ORCA Integration 26 | 27 | The gRPC xDS client will include the entire `named_metrics` field in ORCA load reports from the backend. They will be considered application specific opaque values with no validation. 
28 | 29 | ### Per-Request Support Only 30 | 31 | Custom metrics in LRS load reports will only include `named_metrics` from per-request ORCA load reports and not OOB load reports. See [gRFC A51][A51] for more on per-request and OOB load reports. 32 | 33 | ### Aggregation 34 | 35 | Each per-request ORCA load report will be associated with one request for aggregation purposes. The value and count of each entry in `named_metrics` will be aggregated separately. All values will be considered cumulative and will be aggregated using addition. They will be tracked at the locality level. 36 | 37 | For example, the following ORCA load reports 38 | ```textproto 39 | // report 1/3 40 | named_metrics { key: "key1" value: 1.0 } 41 | named_metrics { key: "key2" value: 2.0 } 42 | // report 2/3 43 | named_metrics { key: "key2" value: 3.0 } 44 | named_metrics { key: "key3" value: 4.0 } 45 | // report 3/3 46 | // (no named_metrics) 47 | ``` 48 | will be aggregated as follows. 49 | |field|value|number of requests| 50 | |------|---|---| 51 | |`key1`|1.0|1| 52 | |`key2`|5.0|2| 53 | |`key3`|4.0|1| 54 | 55 | ### LRS Load Report 56 | 57 | The gRPC xDS client will include custom metrics in the `load_metric_stats` field in the locality stats. Each LRS report will include aggregated custom metrics with keys reported since the last report. Locally aggregated stats will be cleared and the associated total request counts will be reset to zero after each LRS load report is generated.
58 | 59 | Continuing the example from the previous section, here is how the custom metric data will appear in the LRS load report: 60 | ```textproto 61 | // LRS report (envoy.service.load_stats.v3.LoadStatsRequest) 62 | cluster_stats { 63 | // … 64 | upstream_locality_stats { 65 | load_metric_stats { 66 | metric_name: "key1" 67 | num_requests_finished_with_metric: 1 68 | total_metric_value: 1.0 69 | } 70 | load_metric_stats { 71 | metric_name: "key2" 72 | num_requests_finished_with_metric: 2 73 | total_metric_value: 5.0 74 | } 75 | load_metric_stats { 76 | metric_name: "key3" 77 | num_requests_finished_with_metric: 1 78 | total_metric_value: 4.0 79 | } 80 | } 81 | } 82 | ``` 83 | ### xDS Integration 84 | 85 | The `xds_cluster_impl` LB policy, which is already tracking call status for the client-tracked request counts, will be changed to also get the per-request backend metric data reported by the backend. It will report that data to the `XdsClient` along with the request counts. The `XdsClient` will perform aggregation of the data and include it in LRS load reports. 86 | 87 | ## Implementation 88 | 89 | This is implemented in C-core with [#32690][PR_32690]. 90 | This is implemented in Java with [#10282][PR_10282]. 91 | This is implemented in Go with [#7027][PR_7027]. 
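The aggregation rule described above (sum the values and count the contributing requests, per key) can be sketched in a few lines of Python; this is an illustrative model, not gRPC implementation code, and the function name and data shapes are hypothetical:

```python
from collections import defaultdict


def aggregate_named_metrics(reports):
    """Aggregate per-request ORCA `named_metrics` entries.

    `reports` is one dict per finished request, mapping metric key to its
    cumulative value. Returns {key: (total_metric_value, num_requests)}.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for named_metrics in reports:
        for key, value in named_metrics.items():
            totals[key] += value  # cumulative values: aggregate by addition
            counts[key] += 1      # one request contributed this key
    return {key: (totals[key], counts[key]) for key in totals}


# The three example reports from the Aggregation section:
reports = [
    {"key1": 1.0, "key2": 2.0},
    {"key2": 3.0, "key3": 4.0},
    {},  # report with no named_metrics
]
```

Running this on the example reports reproduces the table from the Aggregation section: `key1` totals 1.0 over 1 request, `key2` totals 5.0 over 2 requests, and `key3` totals 4.0 over 1 request.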
92 | 93 | [A27]: https://github.com/grpc/proposal/blob/master/A27-xds-global-load-balancing.md 94 | [A51]: https://github.com/grpc/proposal/blob/master/A51-custom-backend-metrics.md 95 | [ORCA]: https://github.com/envoyproxy/envoy/issues/6614 96 | [PR_32690]: https://github.com/grpc/grpc/pull/32690 97 | [PR_10282]: https://github.com/grpc/grpc-java/pull/10282 98 | [PR_7027]: https://github.com/grpc/grpc-go/pull/7027 99 | -------------------------------------------------------------------------------- /A65-xds-mtls-creds-in-bootstrap.md: -------------------------------------------------------------------------------- 1 | A65: mTLS Credentials in xDS Bootstrap File 2 | ---- 3 | * Author(s): @markdroth 4 | * Approver: @ejona86 5 | * Status: {Draft, In Review, Ready for Implementation, Implemented} 6 | * Implemented in: 7 | * Last updated: 2023-05-24 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/h_LQnTimyt4 9 | 10 | ## Abstract 11 | 12 | This proposal adds support for configuring the use of mTLS for 13 | communicating with the xDS server via the xDS bootstrap file. 14 | 15 | ## Background 16 | 17 | [gRFC A27][A27] defines the xDS bootstrap file format for setting 18 | the channel creds used to communicate with the xDS server, with the 19 | initial options of `google_default` and `insecure`. It suggested that a 20 | general-purpose mechanism could be added to configure arbitrary channel 21 | creds types, which has subsequently been done, albeit not intended as 22 | a public API. 23 | 24 | [gRFC A29][A29] describes the certificate provider framework used for 25 | configuring mTLS for data plane communication in xDS. It also defines 26 | the file-watcher certificate provider. 27 | 28 | This proposal defines a new channel creds type to be used in the 29 | bootstrap file for using mTLS with the xDS server, leveraging the 30 | functionality of the file-watcher certificate provider. 
31 | 32 | ### Related Proposals: 33 | * [gRFC A27: xDS-Based Global Load Balancing][A27] 34 | * [gRFC A29: xDS-Based Security for gRPC Clients and Servers][A29] 35 | 36 | ## Proposal 37 | 38 | We will define a new credential type in the bootstrap file called `tls`. 39 | Its configuration will be essentially the same as that of the file-watcher 40 | certificate provider described in [gRFC A29][A29]. Specifically, the 41 | config will look like this: 42 | 43 | ```json 44 | { 45 | // Path to CA certificate file. 46 | // If unset, system-wide root certs are used. 47 | "ca_certificate_file": <string>, 48 | 49 | // Paths to identity certificate file and private key file. 50 | // If either of these fields is set, both must be set. 51 | // If set, mTLS will be used; if unset, normal TLS will be used. 52 | "certificate_file": <string>, 53 | "private_key_file": <string>, 54 | 55 | // How often to re-read the certificate files. 56 | // Value is the JSON format described for a google.protobuf.Duration 57 | // message in https://protobuf.dev/programming-guides/proto3/#json. 58 | // If unset, defaults to "600s". 59 | "refresh_interval": <string> 60 | } 61 | ``` 62 | 63 | The only difference between the file-watcher certificate provider config 64 | and this one is that in the file-watcher certificate provider, at least 65 | one of the "certificate_file" or "ca_certificate_file" fields must be 66 | specified, whereas in this configuration, it is acceptable to specify 67 | neither one. 68 | 69 | Implementations should be able to internally configure the use of the 70 | file-watcher certificate provider for the certificate-reloading 71 | functionality. 72 | 73 | ### Temporary environment variable protection 74 | 75 | This feature is not enabled via remote I/O, and we don't have a good way 76 | to interop test it, so we will not use env var protection for this feature. 77 | Unit tests in individual languages should be sufficient to verify the 78 | functionality.
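To illustrate where this credential type fits in the bootstrap file defined in [gRFC A27][A27], a bootstrap entry using it for mTLS might look like the following; the server URI and file paths are placeholders, not values from the gRFC:

```json
{
  "xds_servers": [
    {
      "server_uri": "xds.example.com:443",
      "channel_creds": [
        {
          "type": "tls",
          "config": {
            "ca_certificate_file": "/var/run/xds/ca.pem",
            "certificate_file": "/var/run/xds/cert.pem",
            "private_key_file": "/var/run/xds/key.pem"
          }
        }
      ]
    }
  ]
}
```

Omitting `certificate_file` and `private_key_file` would yield normal TLS; omitting the `config` object entirely would use system-wide root certs with the default refresh interval.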
79 | 80 | ## Rationale 81 | 82 | We considered phrasing the credential config in a way that exposes the 83 | certificate provider framework directly, since that would have allowed it 84 | to automatically support any new cert provider implementations we may 85 | add in the future. However, the plumbing for that turned out to be a 86 | bit challenging, so we decided to only directly expose the file-watcher 87 | certificate provider mechanism. If we add other certificate providers 88 | in the future, we can consider adding fields to expose them in this 89 | configuration. 90 | 91 | ## Implementation 92 | 93 | Implemented in C-core in https://github.com/grpc/grpc/pull/33234. 94 | 95 | ## Open issues (if applicable) 96 | 97 | N/A 98 | 99 | [A27]: A27-xds-global-load-balancing.md 100 | [A29]: A29-xds-tls-security.md 101 | -------------------------------------------------------------------------------- /A68_graphics/subsetting100-10-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A68_graphics/subsetting100-10-5.png -------------------------------------------------------------------------------- /A68_graphics/subsetting100-100-25.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A68_graphics/subsetting100-100-25.png -------------------------------------------------------------------------------- /A68_graphics/subsetting100-100-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A68_graphics/subsetting100-100-5.png -------------------------------------------------------------------------------- /A68_graphics/subsetting2000-10-5.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A68_graphics/subsetting2000-10-5.png -------------------------------------------------------------------------------- /A68_graphics/subsetting500-10-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A68_graphics/subsetting500-10-5.png -------------------------------------------------------------------------------- /A69_graphics/README.md: -------------------------------------------------------------------------------- 1 | CrlApiCases.svg and CrlErrorScenarios.svg can be imported into LucidChart 2 | CrlProviderDiagrams.svg can be imported into draw.io -------------------------------------------------------------------------------- /A69_graphics/basic_diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A69_graphics/basic_diagram.png -------------------------------------------------------------------------------- /A69_graphics/basic_table.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A69_graphics/basic_table.png -------------------------------------------------------------------------------- /A69_graphics/golang_reloader.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A69_graphics/golang_reloader.png -------------------------------------------------------------------------------- /A69_graphics/reloader_table.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A69_graphics/reloader_table.png -------------------------------------------------------------------------------- /A6_graphics/StateDiagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/StateDiagram.png -------------------------------------------------------------------------------- /A6_graphics/WhereRPCsFail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/WhereRPCsFail.png -------------------------------------------------------------------------------- /A6_graphics/WhereRetriesOccur.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/WhereRetriesOccur.png -------------------------------------------------------------------------------- /A6_graphics/basic_hedge.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/basic_hedge.png -------------------------------------------------------------------------------- /A6_graphics/basic_retry.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/basic_retry.png -------------------------------------------------------------------------------- /A6_graphics/too_many_attempts.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/too_many_attempts.png -------------------------------------------------------------------------------- /A6_graphics/transparent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A6_graphics/transparent.png -------------------------------------------------------------------------------- /A74_graphics/grpc_client_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A74_graphics/grpc_client_architecture.png -------------------------------------------------------------------------------- /A75_graphics/grpc_client_architecture_aggregate.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A75_graphics/grpc_client_architecture_aggregate.png -------------------------------------------------------------------------------- /A75_graphics/grpc_client_architecture_non_aggregate.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A75_graphics/grpc_client_architecture_non_aggregate.png -------------------------------------------------------------------------------- /A79_graphics/global-instruments-registry.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A79_graphics/global-instruments-registry.png -------------------------------------------------------------------------------- /A79_graphics/global-stats-plugin-registry-usage.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A79_graphics/global-stats-plugin-registry-usage.png -------------------------------------------------------------------------------- /A79_graphics/stats-plugin-scoping.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/A79_graphics/stats-plugin-scoping.png -------------------------------------------------------------------------------- /A82-xds-system-root-certs.md: -------------------------------------------------------------------------------- 1 | A82: xDS System Root Certificates 2 | ---- 3 | * Author(s): @markdroth 4 | * Approver: @ejona86, @dfawley 5 | * Status: {Draft, In Review, Ready for Implementation, Implemented} 6 | * Implemented in: 7 | * Last updated: 2024-07-08 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/BgqeUU0q4fU 9 | 10 | ## Abstract 11 | 12 | We will add a new xDS option to use the system's default root 13 | certificates for TLS certificate validation. 14 | 15 | ## Background 16 | 17 | Most service mesh workloads use mTLS, as described in [gRFC A29][A29]. 18 | However, there are cases where it is useful for applications to use 19 | normal TLS rather than using certificates for workload identity, such as 20 | when a mesh wants to move some workloads behind a reverse proxy. 21 | 22 | gRPC already has code to find the system root certificates on various 23 | platforms. However, there is currently no way for the xDS control plane 24 | to tell the client to use that functionality, at least not without the 25 | cumbersome setup of duplicating that functionality in a certificate 26 | provider config in the xDS bootstrap file. 
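As an illustration of that cumbersome workaround, today the only way to get system roots into the xDS security path is to declare a certificate provider instance in the bootstrap file that points at a platform CA bundle. This is a sketch only: the `certificate_providers`/`plugin_name`/`config` layout follows the bootstrap schema from [gRFC A29][A29], the instance name `system_roots` is arbitrary, and the CA bundle path varies by platform:

```json
{
  "certificate_providers": {
    "system_roots": {
      "plugin_name": "file_watcher",
      "config": {
        "ca_certificate_file": "/etc/ssl/certs/ca-certificates.crt"
      }
    }
  }
}
```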
27 | 28 | ### Related Proposals: 29 | * [gRFC A29: xDS mTLS Security][A29] 30 | 31 | [A29]: A29-xds-tls-security.md 32 | 33 | ## Proposal 34 | 35 | We have added a [`system_root_certs` 36 | field](https://github.com/envoyproxy/envoy/blob/84d8fdd11e78013cd50596fa3b704e152512455e/api/envoy/extensions/transport_sockets/tls/v3/common.proto#L399) 37 | to the xDS `CertificateValidationContext` message (see 38 | envoyproxy/envoy#34235). In the gRPC client, if this field is present 39 | and the `ca_certificate_provider_instance` field is unset, system root 40 | certificates will be used for validation. 41 | 42 | ### xDS Resource Validation 43 | 44 | When processing a CDS resource, we will look at this new field if 45 | `ca_certificate_provider_instance` is unset. The parsed CDS resource 46 | delivered to the XdsClient watcher will indicate if system root certs 47 | should be used. If feasible, the parsed representation should be 48 | structured such that it is not possible to indicate both a certificate 49 | provider instance and using system root certs, since those options are 50 | mutually exclusive. 51 | 52 | The new `system_root_certs` field will not be supported on the gRPC 53 | server side. If `ca_certificate_provider_instance` is unset and 54 | `system_root_certs` is set, the LDS resource will be NACKed. 55 | 56 | ### xds_cluster_impl LB Policy Changes 57 | 58 | The xds_cluster_impl LB policy sets the configuration for the XdsCreds 59 | functionality based on the CDS resource. We will modify it such that if 60 | the CDS resource indicates that system root certs are to be used, it 61 | will configure XdsCreds to use system root certs. 62 | 63 | ### XdsCredentials Changes 64 | 65 | The XdsCredentials code will be modified such that if it is configured 66 | to use system root certs, it will configure the TlsCreds code to do that. 
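Putting the pieces together, the control plane would send a validation context along these lines. This is an abridged textproto sketch of the relevant part of the CDS resource's `CommonTlsContext`; field names follow the Envoy API, and surrounding fields are omitted:

```textproto
# CertificateValidationContext (abridged sketch)
validation_context {
  # ca_certificate_provider_instance is unset; this empty message tells the
  # gRPC client to validate against the system's default root certificates.
  system_root_certs {}
}
```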
67 | 68 | ### Temporary environment variable protection 69 | 70 | Use of the `system_root_certs` field in CDS and LDS will be guarded 71 | by the `GRPC_EXPERIMENTAL_XDS_SYSTEM_ROOT_CERTS` env var. The env var 72 | guard will be removed once the feature passes interop tests. 73 | 74 | ## Rationale 75 | 76 | We already have code in gRPC to find the system root certs for various 77 | platforms. We don't want to have to reproduce that functionality in a 78 | cert provider impl. 79 | 80 | ## Implementation 81 | 82 | C-core implementation in https://github.com/grpc/grpc/pull/37185. 83 | 84 | Will also be implemented in Java, Go, and Node. 85 | -------------------------------------------------------------------------------- /A90-health-service-list-method.md: -------------------------------------------------------------------------------- 1 | A90: Add List Method to gRPC Health Service 2 | ---- 3 | 4 | * **Author(s):** @marcoshuck 5 | * **Approver:** @markdroth 6 | * **Status:** Final 7 | * **Implemented in:** - 8 | * **Last updated:** 2025-03-10 9 | * **Discussion at:** https://groups.google.com/g/grpc-io/c/tEI5G9sX0zc 10 | 11 | ## Abstract 12 | 13 | This proposal introduces a new `List` RPC method for the Health service, allowing clients to retrieve the statuses of 14 | all monitored services. This feature simplifies integration with status-reporting dashboards and enhances observability 15 | for microservices. 16 | 17 | ## Background 18 | 19 | The [existing Health service](https://github.com/grpc/grpc-proto/blob/cbb231341938471b78b38729c2e4a712a9e098d0/grpc/health/v1/health.proto) 20 | provides basic health check functionality but lacks a mechanism to retrieve a comprehensive list of all monitored 21 | services and their statuses. 
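For reference, the existing surface is per-service only: a client must already know each service name it wants to query. Abridged from the health checking proto:

```protobuf
message HealthCheckRequest {
  string service = 1;
}

service Health {
  // One service name in, one status out; there is no way to enumerate
  // the registered services.
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
```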
22 | 23 | This limitation makes it challenging for clients to aggregate health information across multiple services, particularly 24 | for use cases like publishing service statuses to dashboards such as [Cachet](https://cachethq.io/) 25 | or [Statuspage](https://www.atlassian.com/software/statuspage). 26 | 27 | It's important to keep in mind that the list of health services exposed by an application can change over the lifetime 28 | of the process. 29 | 30 | Kubernetes provides a similar capability with its `/readyz?verbose` endpoint, which lists the status of all components. 31 | This proposal aims to bring analogous functionality to the Health service, providing a unified view of service health. 32 | 33 | ### Related Proposals 34 | 35 | - [A17 - Client-Side Health Checking](A17-client-side-health-checking.md) 36 | 37 | ## Proposal 38 | 39 | Introduce a new `List` RPC method in the Health service with the following features: 40 | 41 | - Retrieve the health statuses of all monitored services. 42 | - Ensure the implementation is idempotent and side-effect free. 43 | - Provide a clear schema for the request and response to facilitate integration with external tools. 44 | 45 | ### Proposed API Changes 46 | 47 | ```protobuf 48 | message HealthListRequest { 49 | } 50 | 51 | message HealthListResponse { 52 | map<string, HealthCheckResponse> statuses = 1; // Contains all the services and their respective status. 53 | } 54 | 55 | 56 | service Health { 57 | // List provides a non-atomic snapshot of the health of all the available services. 58 | // 59 | // The maximum number of services to return is 100; responses exceeding this limit will result in a RESOURCE_EXHAUSTED 60 | // error. 61 | // 62 | // Clients should set a deadline when calling List, and can declare the 63 | // server unhealthy if they do not receive a timely response. 64 | // 65 | // Clients should keep in mind that the list of health services exposed by an application 66 | // can change over the lifetime of the process.
67 | // 68 | // List implementations should be idempotent and side effect free. 69 | rpc List(HealthListRequest) returns (HealthListResponse); 70 | } 71 | ``` 72 | 73 | ### Temporary environment variable protection 74 | 75 | N/A 76 | 77 | ## Rationale 78 | 79 | Adding the List RPC method to the Health service strikes a balance between functionality and simplicity. 80 | 81 | ### Alternative approaches 82 | 83 | - Separate Service: Extract the new functionality into a standalone service. This avoids altering the existing Health 84 | service API but increases complexity by introducing another service for clients to interact with. 85 | 86 | ## Implementation 87 | 88 | 1. Add the `List` RPC method and the associated request/response messages in the Health service `.proto` file. 89 | 2. Implement `List` RPC method in the respective gRPC languages. 90 | 3. Update client libraries to support the `List` method. Provide examples demonstrating how to call the method and 91 | handle its response. 92 | 4. Update API documentation to describe the new method, its use cases, and sample requests/responses. 93 | 94 | ## Open issues (if applicable) 95 | 96 | N/A 97 | -------------------------------------------------------------------------------- /CODE-OF-CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Community Code of Conduct 2 | 3 | gRPC follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). 
4 | -------------------------------------------------------------------------------- /G1-true-binary-metadata.md: -------------------------------------------------------------------------------- 1 | True Binary Metadata 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: ejona 5 | * Status: Draft 6 | * Implemented in: n/a 7 | * Last updated: 3/29/17 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/Ww8waYz_Nes 9 | 10 | ## Abstract 11 | 12 | Add a HTTP2 extension to allow binary metadata (those suffixed with -bin) to be 13 | sent without the base64 encode/decode step. 14 | 15 | ## Background 16 | 17 | gRPC allows binary metadata to be sent by applications. These metadata elements 18 | are keyed with a -bin suffix. When transmitted on the wire, since HTTP2 does not 19 | allow binary headers, we base64 encode these elements and Huffman compress them. 20 | On receipt, we transparently reverse the transformation. 21 | 22 | This transformation is costly in terms of CPU, and there exist use cases where 23 | this transformation can become the CPU bottleneck for gRPC. 24 | 25 | ### Related Proposals: 26 | 27 | n/a 28 | 29 | ## Proposal 30 | 31 | ### New setting 32 | 33 | Expose a custom setting in our HTTP2 settings exchange: 34 | GRPC_ALLOW_TRUE_BINARY_METADATA = 0xfe03. 35 | 36 | This setting is randomly chosen (to avoid conflicts with other extensions), and 37 | within the experimental range of HTTP extensions (see https://tools.ietf.org/html/rfc7540#section-11.3). 38 | 39 | The setting can have the values 0 (default) or 1. If the setting is 1, then 40 | peers MAY use a 'true binary' encoding (described below), instead of the current 41 | base64 encoding for -bin metadata. 42 | 43 | Implementations SHOULD transmit this setting only once, and as part of the first 44 | settings frame. 
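To make the cost being avoided concrete, here is a small sketch (in Python for brevity; gRPC implementations do this in their transports) of today's base64 wire form next to the NUL-prefixed form described in the next section:

```python
import base64

def base64_bin(value: bytes) -> bytes:
    # Current wire form for -bin metadata values: base64 without padding
    # (the result is then also Huffman-compressed by HPACK).
    return base64.b64encode(value).rstrip(b"=")

def true_binary(value: bytes) -> bytes:
    # Proposed wire form when the peer advertised
    # GRPC_ALLOW_TRUE_BINARY_METADATA = 1: a NUL prefix, then the raw bytes.
    return b"\x00" + value

# The example from this proposal: the single metadata byte 0x01.
assert base64_bin(b"\x01") == b"AQ"
assert true_binary(b"\x01") == b"\x00\x01"
```

The base64 path allocates and transforms every value in both directions; the true-binary path is a one-byte prefix on the existing buffer.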
45 | 46 | ### 'True binary' encoding 47 | 48 | When transmitting metadata on a connection where the peer has specified 49 | GRPC_ALLOW_TRUE_BINARY_METADATA, instead of encoding using base64, an 50 | implementation MAY instead prefix a NUL byte to the metadata and transmit the 51 | metadata value in binary form. 52 | 53 | Since this is an HTTP2 extension and other extensions might alias this extension 54 | id, it's possible that this becomes misconfigured. In that case, peers are 55 | required to RST_STREAM with HTTP error PROTOCOL_ERROR (as required by https://tools.ietf.org/html/rfc7540#section-10.3). 56 | If a binary encoding was attempted and such a RST_STREAM is received without any other headers, 57 | implementations SHOULD retry the request with base64 encoding, and disable 58 | binary encoding for future requests. Verbosely logging this condition is 59 | encouraged. 60 | 61 | ### Examples 62 | 63 | Suppose we wanted to send metadata element 'foo-bin: 0x01' (i.e., a single byte 64 | with value 1). 65 | 66 | Under base64, we'd send an HTTP header 'foo-bin: AQ'. 67 | Under binary, we'd send 'foo-bin: 0x00 0x01' (i.e., prefixing a NUL byte and then 68 | sending the binary metadata value). 69 | 70 | ## Rationale 71 | 72 | Binary metadata transmission performance is critical for a number of 73 | applications. 74 | 75 | Various workarounds were considered: 76 | 1. Switching to base16 - this would require a backwards incompatible protocol 77 | change for gRPC, and has the disadvantage of bloating wire size, which 78 | additionally interacts badly with hpack. 79 | 2. Adding a new suffix (-raw): this leaks further implementation details to 80 | application developers, and likely would still need a base64 workaround 81 | 82 | ## Implementation 83 | 84 | A trial implementation in gRPC C core is underway and expected to be ready in 85 | coming weeks.
86 | 87 | ## Open issues (if applicable) 88 | 89 | n/a 90 | -------------------------------------------------------------------------------- /GOVERNANCE.md: -------------------------------------------------------------------------------- 1 | This repository is governed by the gRPC organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md). 2 | -------------------------------------------------------------------------------- /GRFC-TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Title 2 | ---- 3 | * Author(s): [Author Name, Co-Author Name ...] 4 | * Approver: a11r 5 | * Status: {Draft, In Review, Ready for Implementation, Implemented} 6 | * Implemented in: 7 | * Last updated: [Date] 8 | * Discussion at: (filled after thread exists) 9 | 10 | ## Abstract 11 | 12 | [A short summary of the proposal.] 13 | 14 | ## Background 15 | 16 | [An introduction of the necessary background and the problem being solved by the proposed change.] 17 | 18 | 19 | ### Related Proposals: 20 | * A list of proposals this proposal builds on or supersedes. 21 | 22 | ## Proposal 23 | 24 | [A precise statement of the proposed change.] 25 | 26 | ### Temporary environment variable protection 27 | 28 | [Name the environment variable(s) used to enable/disable the feature(s) this proposal introduces and their default(s). Generally, features that are enabled by I/O should include this type of control until they have passed some testing criteria, which should also be detailed here. This section may be omitted if there are none.] 29 | 30 | ## Rationale 31 | 32 | [A discussion of alternate approaches and the trade offs, advantages, and disadvantages of the specified approach.] 33 | 34 | 35 | ## Implementation 36 | 37 | [A description of the steps in the implementation, who will do them, and when. If a particular language is going to get the implementation first, this section should list the proposed order.] 
38 | 39 | ## Open issues (if applicable) 40 | 41 | [A discussion of issues relating to this proposal for which the author does not know the solution. This section may be omitted if there are none.] 42 | -------------------------------------------------------------------------------- /L1-cpp-stream-coalescing.md: -------------------------------------------------------------------------------- 1 | # C++ streaming coalescing APIs 2 | 3 | * Author(s): ctiller 4 | * Approver: a11r 5 | * Status: Draft 6 | * Implemented in: C++ 7 | * Last updated: 2017/01/13 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/GfxB7nuV9GY 9 | 10 | ## Abstract 11 | 12 | Provide C++ APIs to coalesce: 13 | - initial metadata and first streaming message 14 | - final streaming message and trailing metadata 15 | into the same core batch. 16 | 17 | ## Background 18 | 19 | Currently when making a streaming call using gRPC and sending a single message, 20 | three core batches are created: one for initial metadata, one for the message 21 | itself, and one for trailing metadata. 22 | 23 | Since the underlying transport has no knowledge that future messages are coming, 24 | it's forced to assume it should initiate writes immediately for each of them. 25 | In typical cases, this results in three sendmsg syscalls per stream. 26 | 27 | The current API allows some coalescing via WriteOptions. One can do: 28 | ```c++ 29 | auto s = stub->Foo(...); 30 | s->Write(first_message, WriteOptions().set_buffer_hint()); 31 | s->WritesDone(); 32 | ``` 33 | This is difficult to discover, and furthermore it performs two round-trips 34 | through the C Core stack, where it would be possible to do one. 35 | 36 | ### Related Proposals: 37 | N/A 38 | 39 | ## Proposal 40 | 41 | ### Expand WriteOptions to include an end-of-stream bit 42 | ```c++ 43 | class WriteOptions { 44 | public: 45 | // ... existing interface ...
46 | 47 | // corked bit: aliases set_buffer_hint currently, with the intent that 48 | // set_buffer_hint will be removed in the future 49 | WriteOptions& set_corked(); 50 | WriteOptions& clear_corked(); 51 | bool is_corked(); 52 | 53 | // last-message bit: indicates this is the last message in a stream 54 | // client-side: makes Write the equivalent of performing Write, WritesDone in 55 | // a single step 56 | // server-side: hold the Write until the service handler returns (sync api) 57 | // or until Finish is called (async api) 58 | WriteOptions& set_last_message(); 59 | WriteOptions& clear_last_message(); 60 | bool is_last_message() const; 61 | }; 62 | ``` 63 | 64 | ### Add a convenience WriteLast method to ClientWriterInterface, ClientReaderWriterInterface, ServerWriterInterface, ServerReaderWriterInterface 65 | ```c++ 66 | // Perform Write, WritesDone in a single step 67 | void WriteLast(const W& msg, WriteOptions options) { 68 | Write(msg, options.set_last_message()); 69 | } 70 | ``` 71 | 72 | ### Add a convenience WriteLast method to ClientAsyncWriterInterface, ClientAsyncReaderWriterInterface, ServerAsyncWriterInterface, ServerAsyncReaderWriterInterface 73 | ```c++ 74 | // Perform Write, WritesDone in a single step 75 | void WriteLast(const W& msg, WriteOptions options, void* tag) { 76 | Write(msg, options.set_last_message(), tag); 77 | } 78 | ``` 79 | 80 | This will require finally exposing WriteOptions to async code also: 81 | ```c++ 82 | void Write(const W& msg, WriteOptions options, void* tag); 83 | ``` 84 | 85 | ### Add a WriteAndFinish method to ServerAsyncWriterInterface, ServerAsyncReaderWriterInterface 86 | ```c++ 87 | // Perform Write, Finish in a single step 88 | void WriteAndFinish(const W& msg, WriteOptions options, const Status& status, void* tag); 89 | ``` 90 | 91 | ### Expand ClientContext to allow corking metadata 92 | ```c++ 93 | class ClientContext { 94 | public: 95 | // ...
96 | 97 | // flag that metadata should be corked (and not sent until the first message 98 | // is sent) 99 | void set_initial_metadata_corked(bool corked); 100 | }; 101 | ``` 102 | 103 | ## Rationale 104 | 105 | The last-message bit provides an avenue for our C++ wrapper to form a C core 106 | batch that contains both a send message and a half close. 107 | 108 | The WriteLast and WriteAndFinish methods, and the additional Stub stream 109 | constructors, provide first-class discoverability for these APIs. Importantly, 110 | code completion tools should offer them as suggestions to new developers, and 111 | they'll appear in top-level documentation. 112 | 113 | The ClientContext change allows a C++ implementation that forms a C core batch 114 | containing both initial metadata and the first message. 115 | 116 | By combining corked initial metadata and WriteLast, clients can coalesce all 117 | of initial metadata, the only message send, and the half close. 118 | 119 | ## Implementation 120 | 121 | This should be straightforwardly implementable in the C++ layer. 122 | -------------------------------------------------------------------------------- /L100-core-narrow-call-details.md: -------------------------------------------------------------------------------- 1 | L100: C-Core Narrow grpc_call_details 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: markdroth 5 | * Status: In Review 6 | * Implemented in: C Core 7 | * Last updated: [Date] 8 | * Discussion at: (filled after thread exists) 9 | 10 | ## Abstract 11 | 12 | Remove unused fields from grpc_call_details. 13 | 14 | ## Background 15 | 16 | grpc_call_details contains two fields that are both always zero. 17 | Remove them, along with the need to test that they are zero. 18 | 19 | 20 | ### Related Proposals: 21 | 22 | None. 23 | 24 | ## Proposal 25 | 26 | From grpc_call_details: 27 | - Remove the field `flags`. 28 | - Remove the field `reserved`.
29 | 30 | ## Rationale 31 | 32 | These fields must always be set to zero and provide no information. 33 | 34 | ## Implementation 35 | 36 | Implemented as part of https://github.com/grpc/grpc/pull/30444. 37 | -------------------------------------------------------------------------------- /L101-core-remove-grpc_register_plugin.md: -------------------------------------------------------------------------------- 1 | L101: C-Core Remove grpc_register_plugin 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: markdroth 5 | * Status: In Review 6 | * Implemented in: C Core 7 | * Last updated: 2022/09/04 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/mFpiC_LyW1o 9 | 10 | ## Abstract 11 | 12 | Remove grpc_register_plugin from the public API. 13 | 14 | ## Background 15 | 16 | Mechanism is no longer needed. 17 | 18 | ### Related Proposals: 19 | 20 | None. 21 | 22 | ## Proposal 23 | 24 | Remove grpc_register_plugin. 25 | 26 | ## Rationale 27 | 28 | This mechanism has never been useful as a public API (it's only useful to call internal registration mechanisms - i.e. those without a public API). 29 | 30 | It blocks the deletion of grpc_init/grpc_shutdown, which will be coming soon. 31 | 32 | ## Implementation 33 | 34 | First roll the final usages directly into grpc_init/grpc_shutdown and then remove them from the public API. 
35 | -------------------------------------------------------------------------------- /L102-cpp-version-macros.md: -------------------------------------------------------------------------------- 1 | L102: New gRPC C++ version macros 2 | ---- 3 | * Author(s): veblush 4 | * Approver: markdroth 5 | * Status: Approved 6 | * Implemented in: gRPC C++ 7 | * Last updated: Oct 26, 2022 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/X2VsZ1MlySg 9 | 10 | ## Abstract 11 | 12 | New public macros reporting the version of gRPC C++ will be added to enable 13 | libraries and applications using gRPC C++ to behave differently depending 14 | on the gRPC C++ version at compile-time. 15 | 16 | ## Background 17 | 18 | It's a widely used code pattern to behave differently based on the version 19 | information available at compile-time. The following is an example: 20 | 21 | ``` 22 | #ifdef GRPC_CPP_VERSION_MAJOR 23 | # if GRPC_CPP_VERSION_MAJOR == 1 24 | # if GRPC_CPP_VERSION_MINOR >= 60 25 | // Use a new feature available from gRPC C++ 1.60 26 | # else 27 | // Do some workaround for gRPC C++ 1.59 or older 28 | # endif 29 | # else 30 | // New major version! 31 | # endif 32 | #else 33 | // Do some workaround for old gRPC C++ 34 | #endif 35 | ``` 36 | 37 | This has been requested by users (e.g. [#25556](https://github.com/grpc/grpc/issues/25556)), and 38 | other Google OSS libraries (e.g. 39 | [Abseil](https://github.com/abseil/abseil-cpp/blob/8c0b94e793a66495e0b1f34a5eb26bd7dc672db0/absl/base/config.h#L88-L115), 40 | [Protobuf](https://github.com/protocolbuffers/protobuf/blob/0d0164feff22a4c9a3e884c60c2987ae87969957/src/google/protobuf/stubs/common.h#L82-L87), 41 | and [Cloud C++](https://github.com/googleapis/google-cloud-cpp/blob/d33e46f94b2dfa6bcad0f2addfbfb5eb4978f40a/google/cloud/internal/version_info.h#L18-L20)) 42 | already provide version macros, so it makes sense for gRPC to provide similar ones. 43 | 44 | ## Proposal 45 | 46 | The following macros will be added to the `grpcpp.h` header file.
47 | 48 | - `GRPC_CPP_VERSION_MAJOR`: Major version part (e.g. 1) 49 | - `GRPC_CPP_VERSION_MINOR`: Minor version part (e.g. 46) 50 | - `GRPC_CPP_VERSION_PATCH`: Patch version part (e.g. 1) 51 | - `GRPC_CPP_VERSION_TAG`: Tag version part (e.g. empty or rc0) 52 | - `GRPC_CPP_VERSION_STRING`: Whole version string (e.g. 1.46.1-rc0) 53 | 54 | This change is going to be reflected via https://github.com/grpc/grpc/pull/31033. 55 | -------------------------------------------------------------------------------- /L103-core-move-insecure-creds-declaration.md: -------------------------------------------------------------------------------- 1 | L103: C-core: Move function declarations for each credential type from `grpc/grpc_security.h` to its own header file 2 | ---- 3 | * Author(s): [Cheng-Yu Chung (@ralphchung)](https://github.com/ralphchung) 4 | * Approver: [@markdroth](https://github.com/markdroth) 5 | * Status: Ready for Implementation 6 | * Implemented in: C Core 7 | * Last updated: 2022-11-16 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/6qvo-UVs-uI 9 | 10 | ## Abstract 11 | 12 | Move function declarations for each credential type from `grpc/grpc_security.h` to its own header file. 13 | 14 | ## Background 15 | 16 | Previously, the PR https://github.com/grpc/grpc/pull/25586 helped move us toward an eventual future where each type of credentials is in its own build target, so applications can link in only the specific one(s) they need. 17 | 18 | However, the issue https://github.com/grpc/grpc/issues/31012 points out the fact that `grpc/grpc_security.h` contains functions that have nothing to do with secure credentials, which contradicts the goal we would like to achieve in the PR above. 19 | 20 | ## Proposal 21 | 22 | Move function declarations for each credential type from `grpc/grpc_security.h` to its own header file. The following is the list of mappings.
23 | 24 | * google_default_credentials: grpc/channel_credentials/google_default.h 25 | * ssl_credentials: grpc/channel_credentials/ssl.h 26 | * alts_credentials: grpc/channel_credentials/alts.h 27 | * local_credentials: grpc/channel_credentials/local.h 28 | * tls_credentials: grpc/channel_credentials/tls.h 29 | * insecure_credentials: grpc/channel_credentials/insecure.h 30 | * xds_credentials: grpc/channel_credentials/xds.h 31 | 32 | ## Rationale 33 | 34 | Moving function declarations to `grpc/grpc.h` seems convenient but is not right. Moving them to their own files makes more sense because our goal is to make every credential type have its own target. 35 | 36 | Note that we do not plan to preserve backward compatibility by including `grpc/channel_credentials/*.h` in `grpc/grpc_security.h`, since we do not promise backward compatibility for the C-core API. 37 | 38 | ## Implementation 39 | 40 | Move function declarations for each credential type from `grpc/grpc_security.h` to its own header file as indicated in the "Proposal" section. 41 | 42 | ## Open issues (if applicable) 43 | 44 | https://github.com/grpc/grpc/issues/31012 45 | -------------------------------------------------------------------------------- /L104-core-ban-recv-with-send-status.md: -------------------------------------------------------------------------------- 1 | L104: C-core: Ban GRPC_OP_SEND_STATUS_FROM_SERVER in combination with recv ops 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: markdroth 5 | * Status: In Review 6 | * Implemented in: C Core 7 | * Last updated: 2022/11/03 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/MAxqC_LI5O8 9 | 10 | ## Abstract 11 | 12 | Ban the combination of ops `GRPC_OP_SEND_STATUS_FROM_SERVER` and any of `GRPC_OP_RECV_MESSAGE`, `GRPC_OP_RECV_CLOSE_ON_SERVER`. 13 | 14 | 15 | ## Background 16 | 17 | Sending status from a server is a request for a full close of that call - neither reads nor writes can proceed after it occurs.
18 | As such, it's unclear what behavior should occur when these operations are combined. 19 | No official API allows the creation of batches with these combinations - but several low level tests (below the canonical bindings) leverage the ability to combine these operations, presumably for brevity of test implementation. 20 | 21 | The internal conversion to promise-based APIs - whose formulation is that the promise completes once status is sent and then does nothing further - is at odds with the implementation quirks that are load-bearing in this minority of tests. 22 | Two solutions present themselves: this proposal, or transparently breaking the batch in two (presumably in call.cc) and sending the non `GRPC_OP_SEND_STATUS_FROM_SERVER` ops first, followed by a second batch with just `GRPC_OP_SEND_STATUS_FROM_SERVER`, and transparently merging the completions before reporting them up. 23 | Though the latter change is more self contained, the former matches better with gRPC's overall semantics and can be performed as a one time test refactoring. 24 | 25 | 26 | ## Proposal 27 | 28 | If a batch passed to grpc_call_start_batch contains: 29 | - `GRPC_OP_SEND_STATUS_FROM_SERVER` 30 | - and `GRPC_OP_RECV_MESSAGE` _or_ `GRPC_OP_RECV_CLOSE_ON_SERVER` 31 | instead of processing it, return `GRPC_CALL_ERROR`. 32 | 33 | 34 | ## Implementation 35 | 36 | https://github.com/grpc/grpc/pull/31554 includes a change that tests for the objectionable condition. With the test in place we'll iterate on updating tests until all tests pass with this new restriction.
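The banned combination can be sketched as a small predicate. This is an illustrative model only: the flag names and helper below are hypothetical, and the actual C-core implementation operates on `grpc_op` arrays rather than bit flags.

```c
#include <stdbool.h>

/* Hypothetical bit flags standing in for the relevant batch op kinds. */
typedef enum {
  OP_SEND_STATUS_FROM_SERVER = 1 << 0,
  OP_RECV_MESSAGE = 1 << 1,
  OP_RECV_CLOSE_ON_SERVER = 1 << 2,
} op_flags;

/* Returns false for batches that grpc_call_start_batch would now reject
 * with GRPC_CALL_ERROR: send-status combined with either recv op. */
static bool batch_is_allowed(unsigned ops) {
  return !((ops & OP_SEND_STATUS_FROM_SERVER) &&
           (ops & (OP_RECV_MESSAGE | OP_RECV_CLOSE_ON_SERVER)));
}
```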
37 | -------------------------------------------------------------------------------- /L105-python-expose-new-error-types.md: -------------------------------------------------------------------------------- 1 | L105: Python Add New Error Types 2 | ---- 3 | * Author(s): XuanWang-Amos 4 | * Approver: gnossen 5 | * Status: In Review 6 | * Implemented in: Python 7 | * Last updated: 08/31/2023 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/pG34X9nAa3c 9 | 10 | ## Abstract 11 | 12 | * Add two new errors to the grpc public API: 13 | * BaseError 14 | * AbortError 15 | * Also change RpcError to be a subclass of BaseError. 16 | 17 | ## Background 18 | 19 | Currently, we don't log anything in case of abort; exposing these error types allows users to catch and handle aborts if they want. 20 | 21 | ## Proposal 22 | 23 | 1. Add AbortError and BaseError to the public API. 24 | 2. Change RpcError to be a subclass of BaseError. 25 | 26 | 27 | ## Rationale 28 | 29 | The Async API [has similar errors](https://github.com/grpc/grpc/blob/v1.57.x/src/python/grpcio/grpc/aio/__init__.py#L23,L24). We're refactoring code so those errors will also be used in the Sync API. Adding them to the Sync API will help us keep the two stacks in sync and allow users of the Sync implementation to catch and handle aborts.
30 | 31 | We also plan to change RpcError to be a subclass of BaseError so that all grpc errors are subclasses of BaseError. This will allow users to catch all gRPC exceptions using code like this: 32 | 33 | ```Python 34 | try: 35 |     do_grpc_stuff() 36 | except grpc.BaseError as e: 37 |     ...  # handle error 38 | ``` 39 | 40 | ## Implementation 41 | 42 | Add and check AbortError on abort: https://github.com/grpc/grpc/pull/33969 43 | -------------------------------------------------------------------------------- /L106-node-heath-check-library.md: -------------------------------------------------------------------------------- 1 | Node Health Check Library 2.0 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: In Review 6 | * Implemented in: Node.js 7 | * Last updated: 2023-08-25 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/6y632NLeUTw 9 | 10 | ## Abstract 11 | 12 | This document proposes a new, implementation-agnostic API for the Node [grpc-health-check](https://www.npmjs.com/package/grpc-health-check) package. 13 | 14 | ## Background 15 | 16 | The current grpc-health-check package is old (the latest release was in 2020), and its API and design negatively impact usability in a few ways: 17 | - Most importantly, it directly depends on the old [grpc](https://www.npmjs.com/package/grpc) package, and inherits all of the platform compatibility problems that contributed to that package's deprecation in 2021. 18 | - The package has no TypeScript type information, which has become much more important in recent years. 19 | - The basic functionality of indicating the serving status of a service involves referring to a protobuf enum deep in a package hierarchy (e.g.
`grpcHealthCheck.messages.HealthCheckResponse.ServingStatus.SERVING`) 20 | - The package exports a `Client` object that requires a specific gRPC implementation to be determined in advance, and it uses protoc-generated types, which are less commonly used and can be awkward to work with. 21 | 22 | ### Related Proposals: 23 | * [gRFC A17: Client-Side Health Checking](https://github.com/grpc/proposal/blob/master/A17-client-side-health-checking.md) 24 | 25 | ## Proposal 26 | 27 | We will publish version 2.0 of the grpc-health-check library with the following API, defined in TypeScript: 28 | 29 | ```ts 30 | type ServingStatus = 'UNKNOWN' | 'SERVING' | 'NOT_SERVING'; 31 | interface ServingStatusMap { 32 | [serviceName: string]: ServingStatus; 33 | } 34 | 35 | class HealthImplementation { 36 | constructor(statusMap: ServingStatusMap); 37 | 38 | /* Update the saved status for the listed service, and send the update to any 39 | * ongoing watch streams. */ 40 | setStatus(serviceName: string, status: ServingStatus); 41 | 42 | /* Serve the information in this object on the server using the 43 | * grpc.health.v1.Health service */ 44 | addToServer(server: Server); 45 | } 46 | 47 | // The path to the health.proto file provided in this package. 48 | const protoPath: string 49 | 50 | /* The service definition object for the grpc.health.v1.Health service, which 51 | * can be passed to makeGenericClientConstructor in either implementation to 52 | * create a Client for the Health service. Uses definitions generated by 53 | * @grpc/proto-loader. */ 54 | const service: ServiceDefinition; 55 | ``` 56 | 57 | ## Rationale 58 | 59 | ### Major version bump 60 | 61 | This change is an opportunity to make some long-needed improvements to this package, and a major version bump is unlikely to negatively affect any users. Relatively few people are currently using this library, because of the previously-mentioned platform compatibility problems with the old grpc package. 
In addition, by default, when installing a package, Node records the dependency with a version range restricted to the same major version, so it is likely that anyone who is currently using the library would continue to use the current version after a new major version is released, unless they explicitly upgrade. On the other hand, anyone who is not currently using the library would need to take explicit action either way to start using it, so the change in major version would not make a difference to them. 62 | 63 | ### Switch from `protoc` to `@grpc/proto-loader` 64 | 65 | JavaScript generation in protoc for Node.js is no longer officially supported, and since this library was created, `@grpc/proto-loader`-based code generation has been added to `@grpc/grpc-js` and `@grpc/grpc-js-xds`, so for consistency, it would be better to use that in this library too. 66 | 67 | 68 | ## Implementation 69 | 70 | I (murgatroid99) will implement this as this design is reviewed. 71 | -------------------------------------------------------------------------------- /L107-node-noop-start.md: -------------------------------------------------------------------------------- 1 | Node: Make `Server#start` a no-op 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: In Review 6 | * Implemented in: Node.js 7 | * Last updated: 2023-10-02 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/zcGg6ipiZMo 9 | 10 | ## Abstract 11 | 12 | Remove the functionality of the `start` method in the `Server` class, and deprecate that method. 13 | 14 | ## Background 15 | 16 | Currently, a `Server` object effectively has a two-phase startup sequence: first, the user calls `bindAsync` to bind a port, then they call `start` to start serving. Specifically, `bindAsync` puts the server into a state where it allows clients to establish HTTP/2 sessions on the specified port, but then immediately closes those sessions.
`start` causes the server to stop closing those sessions, and to instead serve requests on them. 17 | 18 | This behavior is problematic because allowing clients to establish HTTP/2 sessions signals to the client that the server is available, and clients will continue trying to connect to the same server instead of trying other servers. In addition, clients do not back off when reconnecting after a connection that was successfully established is closed, so this behavior will cause clients to reconnect continuously with no backoff. 19 | 20 | ## Proposal 21 | 22 | The behavior of `bindAsync` will be modified, so that the server will begin handling requests immediately after the port is bound. `start` will become a no-op, except that it will throw an error in the same situations when it currently does, for compatibility with code that expects those errors. `start` will also be deprecated, which means that it will output a standard Node deprecation message once per process. 23 | 24 | An incidental consequence of this change is that the rule that `bindAsync` cannot be called after `start` will be removed. 25 | 26 | ## Rationale 27 | 28 | The state we currently provide between `bindAsync` and `start` has little practical use in a running server, but can cause significant behavior and performance degradation if reached by accident (e.g. by omitting the `start` call). So it would be best to avoid those mistakes entirely by not providing that option. This change is potentially breaking if anyone is deliberately using that state, but I think that is unlikely. Anyone who is currently calling `start` in the callback for `bindAsync` should see minimal behavioral difference with this change. 29 | 30 | The Node API for this is very simple, making it difficult to find a useful alternative behavior for `start`.
The `Http2Server` class has a `listen` method, which binds and listens on a port, and automatically accepts incoming connections (and performs the TLS handshake, if applicable) and exchanges `SETTINGS` frames to establish the HTTP/2 session. Once that is complete, the gRPC library gets access to the session object in a `session` event. At that point, the session has already been successfully established from the client's point of view, and the gRPC code can't undo that. The `Http2Server` class also has a `connection` event, which provides access to a `Socket` object representing the TCP socket. Closing the connection at that point results in the client seeing `ECONNRESET` instead of a `GOAWAY`, but the client still considers the connection to have been established, and does not behave differently as a result. 31 | 32 | 33 | ## Implementation 34 | 35 | I (murgatroid99) will implement this in the Node gRPC library. 36 | -------------------------------------------------------------------------------- /L109-node-server-unbind.md: -------------------------------------------------------------------------------- 1 | Node: Server API to unbind ports 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: In Review 6 | * Implemented in: Node.js 7 | * Last updated: 2023-10-18 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/0R8lD87XDo8 9 | 10 | ## Abstract 11 | 12 | Add a new method `unbind` to the `Server` class to unbind a port previously bound by `bindAsync`. 13 | 14 | ## Background 15 | 16 | [gRFC A36: xDS-Enabled Servers](https://github.com/grpc/proposal/blob/master/A36-xds-for-servers.md) specifies that ports must be in the "serving" or "not serving" state. In Node.js, we have decided that the best way to represent this is with the bound state of the port. Since an xDS-enabled server may receive a configuration update that causes it to transition from the "serving" to "not serving" state, it needs to be able to unbind a previously-bound port.
17 | 18 | ### Related Proposals: 19 | * [gRFC A36: xDS-Enabled Servers](https://github.com/grpc/proposal/blob/master/A36-xds-for-servers.md) 20 | 21 | ## Proposal 22 | 23 | A single method `unbind(port: string): void` will be added to the `Server` class. When called, if the `port` argument matches the port argument to a previous call to `bindAsync`, any ports bound by that `bindAsync` call will be unbound, and a GOAWAY will be sent to any active connections that were established using that port. Those connections will still be tracked for shutdown methods, meaning that `tryShutdown` will wait for them to finish, and `forceShutdown` will close them. 24 | 25 | If a previous call to `bindAsync` is still pending, the `unbind` call will cancel it if possible, or close the ports after they have been opened otherwise. If cancelled, the `bindAsync` callback will be called with an error referencing the cancellation. If `unbind` is called while `bindAsync` is pending, and then `bindAsync` is called again with the same `port` and identical `credentials` while the previous `bindAsync` call is still pending, the previous attempt will be un-cancelled if possible, and the callbacks for both `bindAsync` calls will be called when binding is complete. If instead `bindAsync` is called again with different credentials, it will start a new separate bind attempt. 26 | 27 | Ephemeral ports (port 0) are handled differently as a special case. A call to `unbind` with a port number of 0 will throw an error. After a call to `bindAsync` with a port number of 0 succeeds, and the specific bound port number is provided in the callback, `unbind` can be called with the original `port` string with the bound port number substituted for port 0 to unbind that port. 
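The ephemeral-port rule above can be illustrated with a small helper. The helper name and string-splitting logic are illustrative assumptions; the proposal only specifies that the caller substitutes the reported port number into the original `port` string before calling `unbind`.

```typescript
// Build the string to pass to unbind() after a bind completes.
// E.g. bindAsync('0.0.0.0:0', ...) reporting port 50051 -> '0.0.0.0:50051'.
function substituteBoundPort(requestedPort: string, boundPort: number): string {
  const separatorIndex = requestedPort.lastIndexOf(':');
  if (requestedPort.slice(separatorIndex + 1) !== '0') {
    // Non-ephemeral binds are unbound with the original string unchanged.
    return requestedPort;
  }
  return `${requestedPort.slice(0, separatorIndex)}:${boundPort}`;
}
```

A caller would then write something like `server.unbind(substituteBoundPort('localhost:0', boundPort))` once the `bindAsync` callback reports `boundPort`.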
28 | 29 | ## Rationale 30 | 31 | ### Alternatives considered 32 | 33 | #### Use a different server object for each port 34 | 35 | As an alternative to unbinding ports in an existing server, the xDS server could use a separate server object under the hood for each port it wants to bind. The upside of this option is that it does not require any API changes. However, there are a few downsides: 36 | 37 | - The xDS server would need its own tracking for registered services, and possibly other things in the future such as server interceptors, in order to pass each of those things along to each underlying server it creates. 38 | - Channelz statistics would be fragmented between the different server objects. 39 | 40 | 41 | ## Implementation 42 | 43 | I (murgatroid99) will implement this initially as `_unbind`, which is considered non-public because of the underscore prefix. If/when this proposal is accepted, I will also add the non-underscore-prefixed method. 44 | -------------------------------------------------------------------------------- /L111-node-server-drain.md: -------------------------------------------------------------------------------- 1 | Node: Server API to drain connections on a port 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: In Review 6 | * Implemented in: Node.js 7 | * Last updated: 2023-11-09 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/-md8UlWAtKY 9 | 10 | ## Abstract 11 | 12 | Add a new method `drain` to the `Server` class to gracefully close all active connections on a specific port or all ports. 13 | 14 | ## Background 15 | 16 | [gRFC A36: xDS-Enabled Servers][A36] specifies that an update to a Listener resource causes all older connections on that Listener to be gracefully shut down with a grace period for long-lived RPCs. 
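The graceful-shutdown sequence that gRFC A36 calls for can be sketched as follows. The `DrainableConnection` interface is an illustrative stand-in for the server's internal per-connection state, not a real grpc-js type.

```typescript
// Stand-in for the server's view of one open connection.
interface DrainableConnection {
  closeGracefully(): void;    // send GOAWAY; stop accepting new streams
  cancelOpenStreams(): void;  // forcibly end streams still open
}

// Gracefully close every connection now; cancel stragglers after the grace
// period so long-lived RPCs get a chance to finish first.
function drainConnections(
  connections: DrainableConnection[],
  graceTimeMs: number
): ReturnType<typeof setTimeout> {
  for (const connection of connections) {
    connection.closeGracefully();
  }
  return setTimeout(() => {
    for (const connection of connections) {
      connection.cancelOpenStreams();
    }
  }, graceTimeMs);
}
```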
17 | 18 | 19 | ### Related Proposals: 20 | * [gRFC A36: xDS-Enabled Servers][A36] 21 | * [gRFC L109: Node: Server API to unbind ports][L109] 22 | 23 | ## Proposal 24 | 25 | A single method `drain(port: string, graceTimeMs: number): void` will be added to the `Server` class. When called, all open connections associated with the specified `port` will be closed gracefully. After `graceTimeMs` milliseconds, all open streams on those connections will be cancelled. 26 | 27 | The `port` string will be handled the same as in [gRFC L109][L109]: the name will be normalized, and then used to reference the existing list of bound ports. Port 0 is handled specially: a call to `drain` with port 0 will throw an error. After a call to `bindAsync` with a port number of 0 succeeds, and the specific bound port number is provided in the callback, `drain` can be called with the original `port` string with the bound port number substituted for port 0 to drain connections on that port. 28 | 29 | ## Rationale 30 | 31 | ### Alternatives considered 32 | 33 | #### Make relevant internals `protected` 34 | 35 | When implementing [gRFC A36][A36], I intend to make the xDS server a subclass of the `Server` class. TypeScript supports using the `protected` keyword to make fields visible to subclasses. So, it would be possible to make the relevant internals `protected` to be able to implement this functionality directly in the xDS server class. However, doing so makes those fields visible to any user that wants to subclass `Server`, effectively making them part of the library's public API. That would put significant limits on how those fields could change in the future, which I am not currently willing to accept. 36 | 37 | #### Make the arguments optional 38 | 39 | The specific use case described in [gRFC A36][A36] only calls for draining a specific port, and the grace period is always applied, so the API defined here fully handles that use case. 
The arguments could be made optional in the future if additional use cases arise, without breaking existing callers. 40 | 41 | ## Implementation 42 | 43 | I (murgatroid99) will implement this. 44 | 45 | [A36]: https://github.com/grpc/proposal/blob/master/A36-xds-for-servers.md 46 | [L109]: https://github.com/grpc/proposal/blob/master/L109-node-server-unbind.md 47 | -------------------------------------------------------------------------------- /L113-core-remove-num-external-connectivity-watchers.md: -------------------------------------------------------------------------------- 1 | L113: C-core: Remove `grpc_channel_num_external_connectivity_watchers()` 2 | ---- 3 | * Author(s): @markdroth 4 | * Approver: @ctiller 5 | * Status: Implemented 6 | * Implemented in: C-core 7 | * Last updated: 2024-02-07 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/CKzjGVVcVBE 9 | 10 | ## Abstract 11 | 12 | We will remove the `grpc_channel_num_external_connectivity_watchers()` 13 | function from the C-core API. 14 | 15 | ## Background 16 | 17 | This function is not really useful for wrapped languages. Currently, 18 | its only callers are in tests and do not appear to actually be 19 | necessary. 20 | 21 | In addition, we are in the process of making some implementation changes 22 | that would be simplified by not having to support this function. 23 | 24 | ## Proposal 25 | 26 | We will remove the `grpc_channel_num_external_connectivity_watchers()` 27 | function from the C-core API. 28 | 29 | ## Implementation 30 | 31 | Implemented in https://github.com/grpc/grpc/pull/35840.
32 | -------------------------------------------------------------------------------- /L114-node-server-connection-injection.md: -------------------------------------------------------------------------------- 1 | L114: Node Server Connection Injection 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: In Review 6 | * Implemented in: Node.js 7 | * Last updated: 2024-02-09 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/Uv3DJRRrvRA 9 | 10 | ## Abstract 11 | 12 | Add a new method `createConnectionInjector` to the `Server` class to allow existing TCP connections or TCP connection-like objects to be injected into the server. These connections would have the TLS handshake conducted according to the credentials provided in the call to that method, and then method handlers and interceptors for the server would apply as usual. 13 | 14 | ## Background 15 | 16 | As a part of the design [gRFC A29: xDS-Based Security for gRPC Clients and Servers][A29], a single server can apply different security configurations to different incoming connections on the same port depending on properties of those connections. This is not possible with the existing Node gRPC Server implementation, because the Node HTTP2 server listens on a port and automatically performs the TLS handshake for every incoming connection. 17 | 18 | In addition, this functionality has been requested in [grpc/grpc-node#2317](https://github.com/grpc/grpc-node/issues/2317). 19 | 20 | ### Related Proposals: 21 | * [A29: xDS-Based Security for gRPC Clients and Servers][A29] 22 | * [L111: Node: Server API to drain connections on a port][L111] 23 | * [L109: Node: Server API to unbind ports][L109] 24 | 25 | ## Proposal 26 | 27 | We will add a new method `createConnectionInjector(credentials: ServerCredentials): ConnectionInjector` to the `Server` class. 
The `ConnectionInjector` class has the following API: 28 | 29 | ```ts 30 | interface ConnectionInjector { 31 | injectConnection(connection: stream.Duplex): void; 32 | drain(graceTimeMs: number): void; 33 | destroy(): void; 34 | } 35 | ``` 36 | 37 | The `injectConnection` method accepts any duplex byte stream object, represented as the built-in `stream.Duplex` class. The built-in APIs represent TCP connections with the `net.Socket` class, which is a subclass of `stream.Duplex`. The server will perform the TLS handshake with the specified credentials and then handle the connection just like any other connection that comes in on a listening port. 38 | 39 | The `drain` method gracefully closes all open connections injected into this `ConnectionInjector`, similar to the `Server#drain` method defined in [gRFC L111][L111]. 40 | 41 | The `destroy` method shuts down the `ConnectionInjector` and gracefully closes all open connections injected into it, similar to the `Server#unbind` method defined in [gRFC L109][L109]. 42 | 43 | ## Rationale 44 | 45 | ### `drain` method 46 | 47 | The xDS Server needs to be able to drain existing connections after receiving an update to the `Listener` resource. A `drain` method on the `ConnectionInjector` provides a simple way to do that, and it matches an existing `Server` method. 48 | 49 | ### `destroy` method 50 | 51 | A connection injector does not own any listening TCP ports, so it generally does not represent resources that need to be released. However, [gRFC A29][A29] introduces new credentials types that are more resource-intensive, so it is useful to be able to release references to those.
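To make the intended call flow concrete, here is a self-contained sketch using a minimal stand-in for the proposed interface. The stand-in class is purely illustrative; the real injector would be created by `Server#createConnectionInjector` and would perform the TLS handshake internally.

```typescript
import { Duplex, PassThrough } from 'stream';

// Minimal stand-in implementing the proposed ConnectionInjector shape.
class StubConnectionInjector {
  private connections: Duplex[] = [];
  injectConnection(connection: Duplex): void {
    // Real implementation: handshake with this injector's credentials,
    // then hand the connection to the normal request-handling path.
    this.connections.push(connection);
  }
  drain(graceTimeMs: number): void {
    // Real implementation: gracefully close this.connections (gRFC L111).
    void graceTimeMs;
  }
  destroy(): void {
    // Real implementation: also releases credential resources (gRFC A29).
    this.connections = [];
  }
  get connectionCount(): number {
    return this.connections.length;
  }
}

// Per A29, a server could hold one injector per security configuration and
// route each accepted socket to the injector matching its properties.
const injector = new StubConnectionInjector();
injector.injectConnection(new PassThrough());
```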
52 | 53 | ### Alternatives considered 54 | 55 | #### Opaque handle usable in existing APIs 56 | 57 | An alternative design is for `createConnectionInjector` to return an opaque object (`Handle`) that can be passed as an argument to the `drain` and `unbind` methods, in addition to another new `Server` method `injectConnection(handle: Handle, connection: stream.Duplex)`. This is functionally equivalent to the proposed design, but I think it's cleaner to have that functionality in an object. 58 | 59 | ## Implementation 60 | 61 | I (murgatroid99) will implement this in parallel with the design review. 62 | 63 | [A29]: https://github.com/grpc/proposal/blob/master/A29-xds-tls-security.md 64 | [L111]: https://github.com/grpc/proposal/blob/master/L111-node-server-drain.md 65 | [L109]: https://github.com/grpc/proposal/blob/master/L109-node-server-unbind.md 66 | -------------------------------------------------------------------------------- /L115-core-refactor-generic-service-stub.md: -------------------------------------------------------------------------------- 1 | L115: C-Core Refactor generic service and generic stub 2 | ---- 3 | * Author(s): ysseung 4 | * Approver: ctiller 5 | * Status: Draft 6 | * Implemented in: C Core 7 | * Last updated: 2024/04/15 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/301w3zXYf8o 9 | 10 | ## Abstract 11 | 12 | We will refactor `async_generic_service.h` and `generic_stub.h` to move 13 | callback based interfaces into separate targets. Existing files will include 14 | the callback based interfaces. 15 | 16 | ## Background 17 | 18 | This enables services to only depend on the callback based interfaces of the 19 | generic stub and generic service. 20 | 21 | ### Related Proposals 22 | 23 | None. 24 | 25 | ## Proposal 26 | 27 | We will move `CallbackGenericService` in `async_generic_service.h` to a new file 28 | `callback_generic_service.h` and make it a new public target 29 | `:callback_generic_service`. 
`async_generic_service.h` will include 30 | `callback_generic_service.h` and existing dependencies will continue to 31 | include both. 32 | 33 | We will also move the callback based interfaces of `TemplatedGenericStub` in 34 | `generic_stub.h` to a new class `TemplatedGenericStubCallback` in 35 | `generic_stub_callback.h` and make it a new public target 36 | `:generic_stub_callback`. `TemplatedGenericStub` will inherit 37 | `TemplatedGenericStubCallback` and existing dependencies will continue to 38 | include both. 39 | 40 | ## Implementation 41 | 42 | https://github.com/grpc/grpc/pull/36447 43 | -------------------------------------------------------------------------------- /L116-core-loosen-max-pings-without-data.md: -------------------------------------------------------------------------------- 1 | ## L116: C++-Core: Loosen behavior of `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` 2 | 3 | * Author: Yash Tibrewal (@yashykt) 4 | * Approver: Mark Roth (@markdroth), Craig Tiller (@ctiller) 5 | * Status: Draft 6 | * Implemented in: 7 | * Last updated: 2024-04-25 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/X_iApPGvwmo 9 | 10 | ## Abstract 11 | 12 | Modify the behavior of `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` to throttle pings 13 | to a frequency of 1 minute instead of completely blocking pings when too many 14 | pings have been sent without data/header frames being sent. 15 | 16 | ## Background 17 | 18 | [A8: Client-side Keepalive](A8-client-side-keepalive.md) and 19 | [A9: Server-side Connection Management](A9-server-side-conn-mgt.md) designed the 20 | use of HTTP2 pings as a keepalive mechanism in gRPC. In gRPC C++-Core, the 21 | following channel arguments implement keepalives. (Refer 22 | [Keepalive User Guide for gRPC Core](https://github.com/grpc/grpc/blob/master/doc/keepalive.md) 23 | for details.) - 24 | 25 | * `GRPC_ARG_KEEPALIVE_TIME_MS` - Period after which a keepalive ping is sent.
26 | * `GRPC_ARG_KEEPALIVE_TIMEOUT_MS` - Period after which sender of keepalive 27 | pings closes transport if it does not receive an acknowledgement of the 28 | ping. 29 | * `GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS` - Allows keepalive pings to be 30 | sent even if there are no calls in flight. 31 | * `GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS` - Minimum allowed 32 | time between a server receiving successive ping frames without sending 33 | data/header frames. 34 | * `GRPC_ARG_HTTP2_MAX_PING_STRIKES` - Number of bad pings server will tolerate 35 | before closing the connection. 36 | 37 | In addition to these channel arguments, the following channel arguments were 38 | introduced in gRPC C++-Core to play nicely with proxies that break connections 39 | that send too many pings. 40 | 41 | * `GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS` - Minimum time 42 | sender of a ping would wait between consecutive ping frames without 43 | receiving a data/header frame. 44 | * `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` - Maximum number of pings that can 45 | be sent without a data/header frame. 46 | 47 | These two additional channel arguments have historically caused a lot of pain 48 | and confusion among users of gRPC C++-Core and dependent languages when 49 | configuring keepalives. For example, long-lived streams with sparse 50 | communication get affected by these channel arguments and pings are blocked 51 | completely, defeating the purpose of keepalives. To help with this, 52 | [grpc/grpc#24063](https://github.com/grpc/grpc/pull/24063) deprecated 53 | `GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS`. 
54 | 55 | ### Related Proposals: 56 | 57 | * [A8: Client-side Keepalive](A8-client-side-keepalive.md) 58 | * [A9: Server-side Connection Management](A9-server-side-conn-mgt.md) 59 | 60 | ## Proposal 61 | 62 | Modify the behavior of `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` to throttle pings 63 | to a frequency of 1 minute instead of completely blocking pings when too many 64 | pings have been sent without data/header frames being sent. 65 | 66 | The default setting of `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` is 2, and as 67 | stated earlier, long-lived streams with sparse communication get blocked by this 68 | channel arg if not explicitly disabled by setting it to 0. By loosening the 69 | behavior to throttle the pings instead, these use-cases would be less impacted. 70 | 71 | ### Temporary experiment protection 72 | 73 | The C++-Core experiment "max_pings_wo_data_throttle" will be used to guard this 74 | change in behavior. 75 | 76 | ## Rationale 77 | 78 | Ideally, we would be able to deprecate `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` 79 | similar to `GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS` and remove 80 | the pain completely. Unfortunately, this would break current users that go 81 | through proxies that limit how often a ping can be sent. 82 | 83 | If there's user interest and if the behavior of 84 | `GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA` is still considered useful, we could 85 | consider adding a channel argument that controls how long pings are throttled. 86 | 87 | ## Implementation 88 | 89 | Implemented in [grpc/grpc#36374](https://github.com/grpc/grpc/pull/36374). 
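The proposed loosening can be modeled as a small decision function. This is illustrative only; the names and the exact accounting differ in the C-core transport.

```cpp
#include <cstdint>

// Under the proposed behavior, exceeding the configured maximum number of
// pings without data no longer blocks pings outright; instead pings are
// throttled to at most one per minute.
constexpr int64_t kPingThrottleIntervalMs = 60 * 1000;

bool CanSendKeepalivePing(int pings_sent_without_data,
                          int max_pings_without_data, int64_t now_ms,
                          int64_t last_ping_sent_ms) {
  // Setting GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA to 0 disables the limit.
  if (max_pings_without_data == 0) return true;
  if (pings_sent_without_data < max_pings_without_data) return true;
  // Old behavior: return false unconditionally here.
  // Proposed behavior: still allow one ping per throttle interval.
  return now_ms - last_ping_sent_ms >= kPingThrottleIntervalMs;
}
```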
90 | -------------------------------------------------------------------------------- /L118-core-remove-cronet.md: -------------------------------------------------------------------------------- 1 | L118: Remove the Cronet Transport from gRPC Core 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: C++, Objective-C 7 | * Last updated: [Date] 8 | * Discussion at: (filled after thread exists) 9 | 10 | ## Abstract 11 | 12 | Remove the Cronet transport from gRPC core (and consequently the Objective-C bindings to it). 13 | 14 | ## Background 15 | 16 | gRPC C++ and Objective-C can use Cronet on iOS as an alternative HTTP/2 stack. 17 | The Cronet library itself has been deprecated on iOS since September 2023, and so instead of maintaining this code we're opting to remove it. 18 | 19 | ## Proposal 20 | 21 | Remove the Cronet transport, and related APIs in core and Objective-C: 22 | - `grpc_cronet_secure_channel_create`, and the containing header `grpc_cronet.h` 23 | - `CronetChannelCredentials` in the C++ API 24 | - `GRPCCall+Cronet.{h,mm}`, `GRPCCronetChannelFactory.{h,mm}` and all API therein 25 | - `gGRPCCoreCronetID` 26 | 27 | ## Rationale 28 | 29 | The Cronet team no longer maintains this code, so any bugs are going to be up to the gRPC team to fix, and gRPC does not have the expertise to maintain this stack. 30 | In addition, the gRPC C++ team is currently revamping its transport interfaces, and this transport will either need to be rewritten or removed - and rewriting atop a deprecated engine does not seem to be the right way forward. 31 | 32 | ## Implementation 33 | 34 | A single PR will be submitted to remove the bulk of the transport. Follow-ups may be submitted to garbage collect missed pieces later.
35 | -------------------------------------------------------------------------------- /L120-requiring-cpp17.md: -------------------------------------------------------------------------------- 1 | L120: Requiring C++17 in gRPC Core/C++ Library 2 | ---- 3 | * Author(s): veblush 4 | * Approver: markdroth 5 | * Status: Approved 6 | * Implemented in: n/a 7 | * Last updated: Dec 4, 2024 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/HXnIJJnMdgc 9 | 10 | ## Abstract 11 | 12 | To align with [the OSS Foundational C++ support policy](https://opensource.google/documentation/policies/cplusplus-support), gRPC is updating its minimum required C++ standard to C++17. 13 | 14 | ## Background 15 | 16 | To leverage the advancements in C++ standards, gRPC has progressively updated its requirements. 17 | Initially, it adopted C++11 in 2017 (as per [L6: Allow C++ in gRPC Core Library](L6-core-allow-cpp.md)). 18 | Then, in 2022, it transitioned to C++14 (as per [L98: Requiring C++14 in gRPC Core/C++ Library](L98-requiring-cpp14.md)). 19 | 20 | Now, to align with [the OSS Foundational C++ support policy](https://opensource.google/documentation/policies/cplusplus-support) 21 | and stay consistent with its major dependencies (Abseil, BoringSSL, and Protobuf), gRPC is transitioning to require C++17. 22 | 23 | ## Proposal 24 | 25 | gRPC 1.69 will be the final release compatible with C++14. Going forward, gRPC will require C++17. This change will take effect with gRPC 1.70. 26 | 27 | gRPC 1.69 will continue to receive critical bug fixes (P0) and security updates for one year (until December 10, 2025). 28 | 29 | This update does not introduce API changes, so the major version of gRPC remains unchanged.
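Downstream projects can surface the new requirement early with a compile-time guard rather than failing deep inside gRPC headers; a minimal sketch (the message text and helper name are illustrative, not part of gRPC):

```cpp
#include <optional>  // a C++17 header; unavailable under -std=c++14

// 201703L is the standard __cplusplus value for C++17. Note that MSVC
// needs /Zc:__cplusplus for this macro to report the real standard.
static_assert(__cplusplus >= 201703L,
              "gRPC >= 1.70 requires C++17; raise your -std= flag");

// Sanity check that a C++17 library feature is actually usable.
std::optional<int> RequiredCppStandard() { return 17; }
```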
30 | -------------------------------------------------------------------------------- /L121-removing-core-cpp-public-hdrs: -------------------------------------------------------------------------------- 1 | L121: Deprecating and removing the `grpc++_public_hdrs` target 2 | ---- 3 | * Author(s): Adam Heller 4 | * Approver: Craig Tiller 5 | * Status: Draft 6 | * Implemented in: C++ 7 | * Last updated: 2025-01-07 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/YtWq0Oa_IyY 9 | 10 | ## Abstract 11 | 12 | The `grpc++_public_hdrs` target is not intended for public use, and it is being made private. It is redundant with the `grpc++` target, and all users should depend on `grpc++` instead. 13 | 14 | ## Background 15 | 16 | The gRPC Core/C++ public API headers are exposed from multiple Bazel targets, and users should only need to depend on the complete `grpc` and `grpc++` targets. 17 | The `*_public_hdrs` targets are sometimes useful for internal gRPC build configurations, and are still used to break up large monolithic targets and avoid circular dependencies. 18 | Nobody knows whether these redundant header-only targets were ever meant to be public when they were introduced many years ago, but it's generally thought to be a bad idea. 19 | Today, there is no good reason to continue to expose redundant header-only gRPC public API targets, and their existence is a liability and a nuisance. 20 | 21 | In the past few years, the gRPC maintainers have put effort into reclaiming ownership of internal code, which has improved our ability to make changes that should not affect our users. 22 | The long-term plan is to refactor the build configuration and ideally delete the public header targets entirely. 23 | 24 | ## Proposal 25 | 26 | The `grpc++_public_hdrs` target will be given private visibility.
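For users who currently depend on the header-only target, the migration is a one-line dependency change in their Bazel `BUILD` file; a hypothetical example (the library name `my_client_lib` is illustrative):

```bzl
cc_library(
    name = "my_client_lib",
    srcs = ["my_client.cc"],
    deps = [
        # Before (will break once visibility is restricted):
        # "@com_github_grpc_grpc//:grpc++_public_hdrs",
        # After: depend on the complete C++ target instead.
        "@com_github_grpc_grpc//:grpc++",
    ],
)
```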
-------------------------------------------------------------------------------- /L122-core-remove-gpr_atm_no_barrier_clamped_add.md: -------------------------------------------------------------------------------- 1 | L122: Remove core function gpr_atm_no_barrier_clamped_add 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: markdroth 5 | * Status: Approved 6 | * Implemented in: C++ 7 | * Last updated: 2024/12/10 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/YnxuoZdRH-Y 9 | 10 | ## Abstract 11 | 12 | Remove `gpr_atm_no_barrier_clamped_add`. 13 | 14 | ## Background 15 | 16 | There are no uses of this function remaining in gRPC Core; the function is buggy as implemented, and we don't want to fix it. 17 | 18 | ## Proposal 19 | 20 | Remove `gpr_atm_no_barrier_clamped_add`. 21 | 22 | ## Implementation 23 | 24 | https://github.com/grpc/grpc/pull/38263 25 | -------------------------------------------------------------------------------- /L17-cpp-sync-server-exceptions.md: -------------------------------------------------------------------------------- 1 | C++ synchronous server should catch exceptions from method handlers 2 | ---- 3 | * Author(s): vpai 4 | * Approver: a11r 5 | * Status: Approved 6 | * Implemented in: https://github.com/grpc/grpc/pull/13815 7 | * Last updated: January 11, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/u1XmQPAi3fA 9 | 10 | ## Abstract 11 | 12 | The C++ synchronous server should catch exceptions thrown by user method 13 | handlers and fail the RPC rather than crashing the server. 14 | 15 | ## Background 16 | 17 | The C++ sync server is the one place in the C++ API that calls out from the gRPC 18 | library to user code. Although it is a [Google standard not to throw 19 | exceptions](https://google.github.io/styleguide/cppguide.html#Exceptions), we 20 | cannot be sure that user code follows those guidelines.
At the current time, the 21 | gRPC sync server will not catch any exceptions thrown in a method handler, and 22 | the process will terminate with an uncaught exception. 23 | 24 | ### Related Proposals: 25 | 26 | N/A 27 | 28 | ## Proposal 29 | 30 | The C++ sync server should wrap its invocation of user method handler functions 31 | in a `try/catch` block. 32 | - If the method handler throws any kind of exception, the sync server will 33 | treat it as though it returned an `UNKNOWN` `Status` and will fill in some 34 | error message. 35 | 36 | **NOTE**: An earlier version of this proposal suggested using the 37 | `what` result of a `std::exception` as the error message, but this 38 | should not be done since it might leak sensitive information from 39 | the server implementation. 40 | 41 | Additionally, this work will have the following constraints: 42 | 1. No `throw`s will be introduced into the gRPC C++ source code base 43 | 1. gRPC will continue to build and run correctly if the `-fno-exceptions` 44 | compiler flag is used to prevent the use of C++ exceptions. In that 45 | case, the pre-existing behavior of the library will be maintained. 46 | 1. A new portability build configuration will be added to guarantee 47 | that the library continues to build without the use of exceptions. 48 | 1. If there is no exception thrown by a method handler invocation, 49 | there will be no observable performance impact for common compiler 50 | and runtime configurations. 51 | 52 | ## Rationale 53 | 54 | Although we can push back and say that the service implementer should be 55 | responsible for making sure to not call functions that cause exceptions or catch 56 | any exceptions that their code may generate, this is unreliable and 57 | error-prone. The user may not realize that 20 levels down the abstraction stack, 58 | some function can trigger an exception. So, the method handler will end up 59 | generating an uncaught exception and terminating the server. 
60 | 61 | Another alternative is that the user unsure about exception semantics of the 62 | method handler implementation can wrap every method handler in a `try/catch` 63 | block. However, this leads to excessive boilerplate code. 64 | 65 | In contrast to both of the existing options, `catch`ing in the library doesn't 66 | blow up the user's code and helps to maintain server robustness. Using an 67 | `UNKNOWN` status code is literally reasonable since gRPC by definition does not 68 | know the details of the user's method handler. Modern compiler and runtime 69 | implementations also do not take a performance hit from using `try` if there is 70 | no exception to catch. 71 | 72 | Users may want a different status code or error message choice on 73 | exceptions. Such users should create their own `try/catch` blocks in 74 | their method handlers and `return` the `Status` of their choice. 75 | 76 | ## Implementation 77 | 78 | Wrap all method handler invocations in a `try/catch` block that is itself 79 | protected by preprocessor macros to allow for compilation without exception 80 | support (e.g., with the use of `-fno-exceptions` in gcc or clang). 
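The guarded wrapping described above can be sketched with a stdlib-only model (`CallMethodHandler` and the `Status` type here are stand-ins, not the actual gRPC internals; the exception-detection macros are the ones gcc/clang and MSVC define when exceptions are enabled):

```cpp
#include <functional>
#include <string>

// Minimal stand-ins for grpc::StatusCode / grpc::Status.
enum class StatusCode { OK, UNKNOWN };
struct Status {
  StatusCode code;
  std::string message;
};

// Model of the sync server's handler invocation: the try/catch is compiled
// out when exceptions are disabled, preserving the pre-existing behavior.
Status CallMethodHandler(const std::function<Status()>& handler) {
#if defined(__EXCEPTIONS) || defined(_CPPUNWIND)
  try {
    return handler();
  } catch (...) {
    // Deliberately does NOT forward e.what(): it could leak sensitive
    // details of the server implementation.
    return {StatusCode::UNKNOWN, "Unexpected error in RPC handling"};
  }
#else
  return handler();  // -fno-exceptions: old behavior maintained
#endif
}
```

A throwing handler is thus converted into a failed RPC with `UNKNOWN` status instead of terminating the process.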
81 | 82 | ## Open issues (if applicable) 83 | 84 | N/A 85 | -------------------------------------------------------------------------------- /L18-core-remove-grpc-alarm.md: -------------------------------------------------------------------------------- 1 | Delete grpc_alarm from gRPC core API 2 | ---- 3 | * Author(s): vjpai 4 | * Approver: a11r 5 | * Status: Approved 6 | * Implemented in: https://github.com/grpc/grpc/pull/14015 7 | * Last updated: January 14, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/ikqxu_dzqQs 9 | 10 | ## Abstract 11 | 12 | Remove `struct grpc_alarm` and the `grpc_alarm_*` functions from the core API. 13 | 14 | ## Background 15 | 16 | `grpc::Alarm` [was introduced as a gRPC C++ API](https://github.com/grpc/grpc/pull/3618) to inject completion queue events at specified times because it was observed to be [difficult to manage timing-based code in the asynchronous API](https://github.com/grpc/grpc/pull/1949). This was implemented by creating a 17 | matching `grpc_alarm` in core which internally used the existing `grpc_timer` as its 18 | implementation; the `grpc::Alarm` in C++ was only a thin wrapper around `grpc_alarm`. 19 | 20 | ### Related Proposals: 21 | 22 | This is related to the overall project of [de-wrapping C++](https://github.com/grpc/grpc/projects/8). 23 | 24 | ## Proposal 25 | 26 | * Remove `grpc_alarm` and related functions from gRPC Core. 27 | * Re-implement `grpc::Alarm` by directly invoking gRPC Core sub-surface features such as `grpc_timer`. 28 | 29 | ## Rationale 30 | 31 | `grpc::Alarm` has been used by external projects. However, `grpc_alarm` has not been used by any wrapped language besides C++. Thus, it is not needed in core, and removing it from core allows additional flexibility in its C++ implementation. 32 | 33 | ## Implementation 34 | 35 | https://github.com/grpc/grpc/pull/14015 implements this change.
36 | 37 | ## Open issues (if applicable) 38 | 39 | N/A 40 | -------------------------------------------------------------------------------- /L21-core-gpr-review.md: -------------------------------------------------------------------------------- 1 | Privatize gpr_ headers that are not used by wrapped languages 2 | ---- 3 | * Author(s): vjpai 4 | * Approver: nicolasnoble 5 | * Status: Approved 6 | * Implemented in: https://github.com/grpc/grpc/pull/14184, https://github.com/grpc/grpc/pull/14190, https://github.com/grpc/grpc/pull/14196, https://github.com/grpc/grpc/pull/14197 7 | * Last updated: January 25, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/xdKVhaGhhAE 9 | 10 | ## Abstract 11 | 12 | Review the contents of `include/grpc/support` to see which should 13 | remain public (if used in wrapped languages or public API), which 14 | should be moved to `test` (if only used for tests), and which should 15 | be internalized in `src/core/lib/gpr` (if not needed for public-facing 16 | features and used only in the core and C++ API implementation). 17 | 18 | ## Background 19 | 20 | The `gpr` functions implement a variety of support features used in 21 | gRPC core, the C++ API implementation, wrapped languages, and 22 | tests. The publicly-exposed portion of the `gpr` API resides in 23 | `include/grpc/support` and includes many features that are not 24 | actually important to any language API (such as AVL trees). 25 | 26 | ### Related Proposals: 27 | 28 | N/A 29 | 30 | ## Proposal 31 | 32 | Curate the contents of `include/grpc/support` 33 | - Keep entries that are used in wrapped language implementations or 34 | any public API (e.g., `time.h`, `alloc.h`, ...) 35 | - Move entries to `test/core/util` if they are only used to support 36 | tests (e.g. 
`cmdline.h`, `subprocess.h`) 37 | - Move entries to `src/core/lib` if they are used only in the core 38 | and C++ API implementations but not used for any public-facing API 39 | (e.g., `avl.h`, `tls.h`). Files like `tls.h` that relate to portability 40 | concerns should be in `src/core/lib/gpr` while those like `avl.h` 41 | that are purely algorithmic should move elsewhere in `src/core/lib` 42 | and replace the `gpr_` prefix in their public contents with the 43 | `grpc_` prefix. 44 | 45 | ## Rationale 46 | 47 | The objective of this is to reduce the number of public API surface 48 | touch points. This is particularly an option at this time as part of 49 | the de-wrapping of the C++ API implementation. 50 | 51 | ## Implementation 52 | 53 | This proposal will be implemented incrementally and opportunistically 54 | across several PRs. 55 | 56 | ## Open issues (if applicable) 57 | 58 | N/A 59 | 60 | -------------------------------------------------------------------------------- /L22-cpp-change-grpcpp-dir-name.md: -------------------------------------------------------------------------------- 1 | gRPC C++ Public Header Directory Change 2 | ---- 3 | * Author(s): muxi 4 | * Approver: nicolasnoble 5 | * Status: approved 6 | * Implemented in: C++ 7 | * Last updated: 01/25/2018 8 | * Discussion at: https://groups.google.com/d/msg/grpc-io/SDyc0hSWWG8/pX2n9BSNAQAJ 9 | 10 | ## Abstract 11 | This proposal is to change the name of the directory `include/grpc++` to `include/grpcpp` to resolve compatibility issues. 12 | 13 | ## Background 14 | gRPC C++ headers have been using the directory `include/grpc++` for a long time. Currently, users use `#include <grpc++/grpc++.h>` to include gRPC headers in their source code. Some gRPC internal code and public headers also use this style to include gRPC public headers. 15 | 16 | However, since the name `grpc++` has the special character `+`, it causes compatibility issues when migrating to certain other platforms. 17 | 18 | One such example is Xcode.
Some gRPC users, such as Firestore, need to build the gRPC C++ library on iOS as an Apple framework. The problem emerges on this platform due to three restrictions: 19 | - gRPC public headers use `#include <grpc++/...>` to include gRPC C++ headers; 20 | - An iOS app must include headers of a framework with the format `#include <framework_name/header_name.h>`; 21 | - An Apple framework's name must be C99 extended identifier compatible; `grpc++` is not compatible. 22 | 23 | These three restrictions make it impossible to make the gRPC C++ library work as an Apple framework. We believe this issue can happen somewhere else too in the future (e.g. [grpc#14089](https://github.com/grpc/grpc/pull/14089) is another example where this naming convention creates problems). 24 | 25 | ## Proposal 26 | 27 | The proposal is to migrate the `include/grpc++` directory to `include/grpcpp` in a backwards compatible manner. The objective is that both styles of inclusion, `#include <grpc++/...>` and `#include <grpcpp/...>`, can be used by all past and future gRPC C++ users. 28 | 29 | ### Changes to gRPC C++ API 30 | 31 | - Make the `include/grpcpp` directory the new location for all gRPC C++ public headers; move all current headers from `include/grpc++` to `include/grpcpp`; update all corresponding inclusions inside the gRPC code base; 32 | - Make wrapper headers in `include/grpc++` for all C++ public headers, preserving the directory structure. This lets current gRPC users keep building with the old directory name. Add a deprecation notice to the wrapper headers as comments. 33 | - Update build systems to expose both the headers and the wrapper headers to users. 34 | 35 | ## Rationale 36 | We think this change is reasonable since, as mentioned in the Background section, the old directory name may cause problems again in the future. Since this is a rather big API change in C++, wrapper headers are created so that current users' builds keep working.
With this change in place: 37 | - Users who need to maintain backwards compatibility can keep using the old directory name; 38 | - New users should use the new directory name; 39 | - Users who need to use gRPC C++ as an Apple framework must use the new directory name. 40 | 41 | ## Implementation 42 | Will be implemented as part of the [gRPC C++ cocoapods library](https://github.com/grpc/grpc/issues/13582) efforts. 43 | -------------------------------------------------------------------------------- /L24-cpp-extensible-api.md: -------------------------------------------------------------------------------- 1 | L24: C++ Extensible API 2 | ----------------------- 3 | * Author(s): makdharma 4 | * Approver: vjpai 5 | * Status: Draft 6 | * Implemented in: https://github.com/grpc/grpc/pull/14517 7 | * Last updated: Feb 26, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/LVpKsQ4CaFg 9 | 10 | ## Abstract 11 | 12 | This proposal will enable extending the functionality of the gRPC C++ API by removing 13 | the `final` keyword and making some class methods `virtual`. It will allow 14 | alternate implementations of the API via subclassing. 15 | 16 | ## Background 17 | 18 | Most gRPC C++ API classes are currently marked `final`. The rationale is that 19 | having virtual methods might have adverse memory size and performance impact. 20 | Additionally, adding `final` to public API classes later on would break existing 21 | implementations that use classes derived from the public API. Hence it is better and 22 | safer to mark classes `final` unless there's a reason to make them extensible. 23 | 24 | ### Related Proposals: 25 | 26 | ## Proposal 27 | 28 | * Remove the `final` keyword from all public API classes. 29 | * Move a subset of current private methods to protected. See the implementation 30 | PR for the currently defined subset. 31 | * Add protected getters/setters for a subset of private member variables.
See 32 | the implementation PR for the currently defined getters/setters. 33 | * Mark core methods as `virtual`, so the functionality can be extended. 34 | * Each such change should go through thorough performance evaluation and should not 35 | be accepted if there is any observed performance degradation. 36 | * Do this work in stages. Begin with the Server, ServerBuilder, and CompletionQueue 37 | classes, and extend to other classes as needed. 38 | 39 | 40 | ## Rationale 41 | 42 | The original rationale for keeping methods private and classes non-extensible 43 | has not held true. See https://github.com/grpc/grpc/pull/14359 for details of 44 | a performance evaluation in the extreme case. Meanwhile, the `final` classes and 45 | non-virtual methods preclude any experimentation with the implementation. This 46 | proposal will allow extending and experimenting with different server, client, 47 | and support structure implementations. 48 | 49 | The performance impact of removing `final` from all public 50 | API classes was not measurable. The impact of adding `virtual` was noticeable in 51 | the case of inproc transport microbenchmarks and only for 256K byte messages. 52 | However, this is an extreme case where `virtual` was added everywhere, which is not 53 | the intention behind this proposal. The more realistic case is seen in 54 | the performance results of PR https://github.com/grpc/grpc/pull/14517, which show 55 | that the impact of both adding `virtual` and removing `final` for a few 56 | important classes is negligible. 57 | 58 | 59 | ## Implementation 60 | 61 | The proposed implementation is in PR https://github.com/grpc/grpc/pull/14517. 62 | It is limited in scope. It doesn't change every public class and method, but it 63 | gives enough extensibility to experiment with alternate implementations.
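The kind of extension this unlocks can be illustrated with a generic sketch (the class names below are hypothetical stand-ins, not the actual gRPC classes):

```cpp
#include <string>

// Before this proposal, a class like this would have been marked `final`
// with private-only internals; afterwards it can expose protected state
// and virtual hooks for experimentation.
class BaseServer {  // hypothetical stand-in for, e.g., grpc::Server
 public:
  virtual ~BaseServer() = default;
  std::string Start() { return "started:" + Backend(); }

 protected:
  virtual std::string Backend() { return "default"; }  // now overridable
};

// An alternate implementation experimenting with a different backend.
class InstrumentedServer : public BaseServer {
 protected:
  std::string Backend() override { return "instrumented"; }
};
```

Subclasses swap in alternate behavior through the virtual hook while the public API surface stays unchanged.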
64 | 65 | ## Open issues (if applicable) 66 | -------------------------------------------------------------------------------- /L25-cpp-expose-buffer-reader-writer.md: -------------------------------------------------------------------------------- 1 | L25: [C++] Make GrpcProtoBuffer{Reader|Writer} Public 2 | ---- 3 | * Author(s): ncteisen 4 | * Approver: vjpai 5 | * Status: Draft 6 | * Implemented in: https://github.com/grpc/grpc/pull/14541 7 | * Last updated: 2018-03-01 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/3FTSA67iios 9 | 10 | ## Abstract 11 | 12 | We propose to move two classes, `GrpcProtoBufferWriter` and `GrpcProtoBufferReader`, to a public directory, so that users may create custom serialization traits without having to "reinvent the wheel". 13 | 14 | ## Background 15 | 16 | We have seen several user complaints that creating custom serializers is not an easy task. For example, TensorFlow [duplicated an internal file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/distributed_runtime/rpc/grpc_serialization_traits.h), which led to a [nasty bug](https://github.com/grpc/grpc/issues/10161). Another project had to use a hacky workaround to subtype the `GrpcProtoBuffer{Reader|Writer}` classes in order to implement a zero copy serializer. 17 | 18 | ### Related Proposals: 19 | * This relates to [L26](https://github.com/grpc/proposal/pull/63), in that it is making it easier to customize serialization code. 20 | 21 | ## Proposal 22 | 23 | We will rewrite `GrpcProtoBuffer{Reader|Writer}` in terms of the public class, `grpc::ByteBuffer`. Then we will move `GrpcProtoBuffer{Reader|Writer}` out of the internal namespace, and add header files in `include/grpcpp/support` so that the classes become accessible. 
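To show the shape of what this enables, here is a rough stdlib-only sketch of a custom serializer over a byte-buffer abstraction (all types below, including `ByteBuffer` and `MySerializationTraits`, are simplified stand-ins for `grpc::ByteBuffer` and the `SerializationTraits` specialization a user would actually write):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for grpc::ByteBuffer.
using ByteBuffer = std::vector<uint8_t>;

struct MyMessage {
  std::string payload;
};

// Shape of a user-defined serializer: with GrpcProtoBuffer{Reader|Writer}
// public, a real grpc::SerializationTraits<MyMessage> specialization could
// reuse them to stream into/out of grpc::ByteBuffer instead of duplicating
// internal code.
struct MySerializationTraits {
  static ByteBuffer Serialize(const MyMessage& msg) {
    return ByteBuffer(msg.payload.begin(), msg.payload.end());
  }
  static MyMessage Deserialize(const ByteBuffer& buf) {
    return MyMessage{std::string(buf.begin(), buf.end())};
  }
};
```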
24 | 25 | ## Implementation 26 | 27 | Implementation is in [#14541](https://github.com/grpc/grpc/pull/14541) 28 | -------------------------------------------------------------------------------- /L26-cpp-raw-codegen-api.md: -------------------------------------------------------------------------------- 1 | L26: [C++] Add Raw API to C++ Server-Side Generated Code 2 | ---- 3 | * Author(s): ncteisen 4 | * Approver: vjpai 5 | * Status: Draft 6 | * Implemented in: https://github.com/grpc/grpc/pull/15771 7 | * Last updated: 2018-07-01 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/3FTSA67iios 9 | 10 | ## Abstract 11 | 12 | This document proposes adding a new API to the C++ generated code layer for servers. This API would designate that the selected methods be handled in a _raw_ manner, which means that their signatures will be written in terms of `::grpc::ByteBuffer` as opposed to API-specific protobuf objects. 13 | 14 | ## Background 15 | 16 | Many teams have expressed interest in having a proto-defined service that has some specialized methods that do custom serialization. The need for this use case has caused pain for users. Notably, TensorFlow was forced to check in generated code in [this commit](https://github.com/tensorflow/tensorflow/commit/6ba4995be6372d1ca5e01eae649c1d8750b65857#diff-7d3228bae4cf4c12f11c17eb147fe5ba). 17 | 18 | ### Related Proposals: 19 | * This relates to [L25](https://github.com/grpc/proposal/pull/61), in that it is making it easier to customize serialization code. 20 | 21 | ## Proposal 22 | 23 | We will add a new API to the generated server code that signals for a method to be written in terms of `::grpc::ByteBuffer`. Marking a method as `Raw` naturally means that it is asynchronous. 24 | 25 | Selecting which methods are `Raw` will be handled in the same manner as we currently allow for selection of Generic, Async, or Streamed methods.
For example: 26 | 27 | ```C++ 28 | using SpecializedServer = 29 | grpc::TestService::WithRawMethod_Foo<grpc::TestService::AsyncService>; 30 | ``` 31 | 32 | This server would use protobuf to negotiate all methods _except_ Foo, which would be handled with ByteBuffers. This allows the application to use whatever serialization mechanism it wants for Foo. 33 | 34 | Marking a method as `Raw` is type unsafe since the gRPC library cannot ensure that the user is serializing and deserializing with the same protocol. This feature is meant for "power users" who are willing to enforce the serialization invariant in their code. 35 | 36 | Continuing with the example, if Foo were unary, then the server would interact with it like so: 37 | 38 | ```C++ 39 | // setup 40 | SpecializedServer* service = BuildServer(...); 41 | grpc::ServerCompletionQueue cq; 42 | grpc::ServerContext srv_ctx; 43 | 44 | // incoming 45 | grpc::ByteBuffer recv_request_buffer; 46 | grpc::GenericServerAsyncResponseWriter response_writer(&srv_ctx); 47 | service->RequestEcho(&srv_ctx, &recv_request_buffer, &response_writer, 48 | &cq, &cq, tag(1)); 49 | 50 | // outgoing 51 | grpc::ByteBuffer send_response_buffer = ProcessRequestBB(recv_request_buffer); 52 | response_writer.Finish(send_response_buffer, Status::OK, tag(2)); 53 | ``` 54 | 55 | All other arities follow the same pattern. 56 | 57 | ## Implementation 58 | 59 | The implementation will follow the same pattern as the generated code for the async API, but will be written with ByteBuffer instead of proto objects. This implementation will cause the `final` qualifier to be removed from the currently generated `WithAsyncMethod_Foo` classes.
60 | 61 | Implementation is in [#15771](https://github.com/grpc/grpc/pull/15771) 62 | -------------------------------------------------------------------------------- /L29-cpp-opencensus-filter.md: -------------------------------------------------------------------------------- 1 | C++ API Changes for OpenCensus Integration 2 | ---- 3 | * Author(s): Vizerai 4 | * Approver: markdroth, vjpai 5 | * Status: In Review 6 | * Implemented in: C++ 7 | * Last updated: 2018-05-14 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/ft8UHY0EUz0 9 | 10 | ## Abstract 11 | 12 | The OpenCensus filter allows grpc to integrate and collect stats and tracing data with OpenCensus (https://github.com/census-instrumentation/opencensus-cpp). The OpenCensus filter uses a number of internal APIs (src/core/lib/slice/slice_internal.h, src/cpp/common/channel_filter.h, src/core/lib/gprpp/orphanable.h, src/core/lib/surface/call.h). Since these internal APIs are subject to change, in order to ease maintenance of the filter, we propose that the filter be moved to the grpc repository (src/cpp/ext/filters/census/). This will introduce an API change for grpc++ when OpenCensus is used, which will require the user to register the OpenCensus filter with grpc. This is in addition to any views and exporters that would need to be registered to use OpenCensus. 13 | 14 | ## Background 15 | 16 | OpenCensus tracing and stats functionality is a desired feature for grpc C++ users. Integrating OpenCensus into grpc core as a direct dependency is not viable due to OpenCensus dependencies which are not supported in grpc (abseil being a primary one). 17 | 18 | ### Related Proposals: 19 | N/A 20 | 21 | ## Proposal 22 | 23 | We propose that the OpenCensus filter (C++ grpc filter) which currently resides in the OpenCensus repository (https://github.com/census-instrumentation/opencensus-cpp) be moved to the grpc repository (src/cpp/ext/filters/census/). 
The OpenCensus filter will be set up as an optional build target which users can include. There is no viable way to include it by default with the default grpc build due to dependency conflicts. Users will have to manually enable the filter by using a filter registration call. 24 | 25 | The major changes introduced by this are as follows: 26 | 1) A new optional dependency on OpenCensus for Bazel builds. This will be a new build target in the Bazel build file. Currently only the Bazel build will be supported. There will be no git submodule for OpenCensus, and no make/cmake support (this may be added at a later time). 27 | 2) Two new public API calls will be introduced to enable the OpenCensus filter for tracing and stats collection and to register default views. The registration call must be made to register the plugin mechanism and initialize OpenCensus. The call to register default views is optional and does not need to be used if the user will be registering their own views. 28 | 29 | * The public API calls will reside in `include/grpcpp/opencensus.h`: 30 | 31 | void RegisterOpenCensusPlugin(); 32 | void RegisterOpenCensusViewsForExport(); 33 | 34 | ## Rationale 35 | 36 | The rationale behind moving the filter code is primarily to ease maintenance of the filter, as it depends on internal grpc APIs that are subject to change. The API calls into the OpenCensus library are unlikely to change in the future, so few if any API-breaking changes are expected from the OpenCensus side. An initialization call is required because there is no viable way to have it enabled by default. Building it directly with grpc is not possible as it introduces dependency conflicts. Dynamic initialization from linking through weak symbols is not available on all platforms (namely Windows). 37 | 38 | ## Implementation 39 | 40 | The migration of the code is relatively simple. The filter within OpenCensus (opencensus-cpp/opencensus/plugins/grpc/) will be migrated to grpc (src/cpp/ext/filters/census/).
A new build target will be introduced for the filter which will allow users to optionally include OpenCensus. This target contains the registration function needed to initialize the OpenCensus filter. 41 | 42 | These changes are currently under review: https://github.com/grpc/grpc/pull/15070 43 | 44 | ## Open issues (if applicable) 45 | N/A 46 | -------------------------------------------------------------------------------- /L30-cpp-control-max-threads-in-SyncServer.md: -------------------------------------------------------------------------------- 1 | * Author(s): Sree Kuchibhotla (sreecha) 2 | * Approver: vjpai 3 | * Status: In Review 4 | * Implemented in: C++ 5 | * Last updated: June 6, 2018 6 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/YQQY0pGG9MI 7 | 8 | ## Abstract 9 | C++ `ThreadManager` is a specialized thread pool used to implement the C++ Synchronous server. 10 | 11 | Currently, `ThreadManager` can be configured to have minimum and maximum number of polling threads. While this helps to always have threads available to poll for work, it does nothing to throttle polling if the `ThreadManager` is overloaded with work. 12 | 13 | The proposal here is to add a notion of 'thread quota' to the `resource_quota` object. Currently, a resource quota object can be attached to the server (via `ServerBuilder::SetResourceQuota`). The idea here is to set a maximum number of threads on that resource quota object. 14 | 15 | Each `ThreadManager` object checks with the Server's resource quota before creating new threads. No new threads are created if the quota is exhausted. 16 | 17 | ## Details 18 | More concretely, the following API changes are being proposed 19 | 20 | ### 1. New *public* C++ API: `ResourceQuota::SetMaxThreads` 21 | 22 | ```C++ 23 | // File: include/grpcpp/resource_quota.h 24 | 25 | /// ResourceQuota represents a bound on memory and threads usage by the gRPC 26 | /// library. 
A ResourceQuota can be attached to a server (via \a ServerBuilder), 27 | /// or a client channel (via \a ChannelArguments). 28 | /// 29 | /// gRPC will attempt to keep memory and threads used by all attached entities 30 | /// below the ResourceQuota bound. 31 | class ResourceQuota final : private GrpcLibraryCodegen { 32 | public: 33 | ... 34 | /// Set the max number of threads that can be allocated from this 35 | /// ResourceQuota object. 36 | /// 37 | /// If the new_max_threads value is smaller than the current value, no new 38 | /// threads are allocated until the number of active threads falls below 39 | /// new_max_threads. There is no time bound on when this may happen, i.e. none 40 | /// of the current threads are forcefully destroyed and all threads run their 41 | /// normal course. 42 | ResourceQuota& SetMaxThreads(int new_max_threads); 43 | ... 44 | ``` 45 | * Max threads are set to 1500 by default. This default is based on tests I ran in the past on my machine (32 GB RAM, 12 cores), which found that ~1500 threads is the inflection point after which things escalate to a thread avalanche very quickly. 46 | * There are two choices on how to implement this: 47 | - (1) Have all thread managers create threads from a common pool (but potentially starving some thread managers and also making the quota check a potential global contention point) 48 | - (2) Divide the max_threads equally among thread managers (with the downside that some thread managers are "over provisioned" while some might be "under provisioned"). 49 | 50 | I am going with option (1) for now; this may change in the future. 51 | 52 | ### 2. New *public* Core-Surface API: `grpc_resource_quota_set_max_threads` 53 | 54 | ```C++ 55 | // File: include/grpc/grpc.h 56 | 57 | /** Update the maximum number of threads allowed */ 58 | GRPCAPI void grpc_resource_quota_set_max_threads( 59 | grpc_resource_quota* resource_quota, int new_max_threads); 60 | 61 | ``` 62 | ### 3.
New *Private* Core APIs: `grpc_resource_user_alloc_threads` and `grpc_resource_user_free_threads` 63 | 64 | This is a private API and may change. I am including this here just to give an idea of how I plan to implement this. 65 | ```C++ 66 | // File: src/core/lib/iomgr/resource_quota.h 67 | 68 | /* Attempts to get quota (from the resource_user) to create 'thd_count' number 69 | * of threads. Returns true if successful (i.e. the caller is now free to create 70 | * 'thd_count' threads) or false if the quota is not available */ 71 | bool grpc_resource_user_alloc_threads(grpc_resource_user* resource_user, 72 | int thd_count); 73 | /* Releases 'thd_count' worth of quota back to the resource user. The quota 74 | * should have been previously obtained successfully by calling 75 | * grpc_resource_user_alloc_threads(). 76 | * 77 | * Note: There need not be an exact one-to-one correspondence between 78 | * grpc_resource_user_alloc_threads() and grpc_resource_user_free_threads() 79 | * calls. The only requirement is that all threads allocated should 80 | * eventually be released */ 81 | void grpc_resource_user_free_threads(grpc_resource_user* resource_user, 82 | int thd_count); 83 | ``` 84 | ### Related Proposals: 85 | 86 | N/A 87 | 88 | ## Rationale 89 | Sometimes we might have to stop polling altogether if the server is overloaded with work. Currently there is no way to do that (since the minimum pollers setting always ensures some thread is polling for work).
It was an oversight to not add max_threads as an option in the initial `ThreadManager` implementation. 90 | 91 | This has been one of the most requested features. 92 | 93 | ## Open issues (if applicable) 94 | -------------------------------------------------------------------------------- /L31-php-intercetor-api-change.md: -------------------------------------------------------------------------------- 1 | Title 2 | ---- 3 | * Author(s): ZhouyihaiDing 4 | * Approver: mehrdada 5 | * Status: Approved 6 | * Implemented in: PHP 7 | * Last updated: October 24, 2018 8 | * Discussion at: https://groups.google.com/forum/#!searchin/grpc-io/L31|sort:date/grpc-io/DsjvtmeJJPU/QF2-99-oCAAJ 9 | ## Abstract 10 | 11 | This proposal introduces an interceptor API change in gRPC PHP. 12 | 13 | ____ 14 | ## Background 15 | 16 | The gRPC PHP interceptor has the ability to intercept each RPC by modifying `method`, `argument`, 17 | `metadata` and `options`. It would be better if we could also modify `deserialize`. 18 | 19 | The first reason is that since the interceptor can manipulate the `method` before 20 | the RPC starts, the `deserialize` function should also be updatable so that it matches 21 | the `method` when the response is received. 22 | 23 | The second reason is that we do not need to hide `deserialize`, because hiding `channel` 24 | already satisfies the purpose of the interceptor. 25 | 26 | ### Related Proposals 27 | 28 | * [PHP Client Interceptors](https://github.com/grpc/proposal/blob/master/L15-PHP-Interceptors.md) 29 | 30 | ## Proposal 31 | 32 | Add `$deserialize` as an argument to the interceptor API. 33 | 34 | ## Implementation 35 | The only change is that the four methods inside the `Interceptor` take `$deserialize` as 36 | the argument.
The new API looks like below: 37 | 38 | ```php 39 | class Interceptor{ 40 | /** 41 | * @param string $method The name of the method to call 42 | * @param mixed $argument The argument to the method 43 | * @param string $deserialize A function that deserializes the response 44 | * @param array $metadata A metadata map to send to the server(optional) 45 | * @param array $options An array of call_options (optional) 46 | * @param function $continuation Used to invoke the next interceptor. 47 | * 48 | * @return \Closure A function which can create a UnaryCall 49 | */ 50 | public function interceptUnaryUnary($method, $argument, $deserialize, array $metadata = [], array $options = [], $continuation){} 51 | 52 | public function interceptStreamUnary($method, $deserialize, array $metadata = [], array $options = [], $continuation){} 53 | 54 | public function interceptUnaryStream($method, $argument, $deserialize, array $metadata = [], array $options = [], $continuation){} 55 | 56 | public function interceptStreamStream($method, $deserialize, array $metadata = [], array $options = [], $continuation){} 57 | } 58 | ``` 59 | 60 | ## Implementation 61 | 62 | Implemented in [PHP: add deserialze as the argument for the interceptor][impl] 63 | 64 | [impl]: https://github.com/grpc/grpc/pull/15779 65 | -------------------------------------------------------------------------------- /L32-node-channel-API.md: -------------------------------------------------------------------------------- 1 | Node.js Channel API 2 | ---- 3 | * Author(s): mlumish 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: Node.js 7 | * Last updated: 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/VR0C7DiG7bs 9 | 10 | ## Abstract 11 | 12 | Expose the `Channel` class in the Node API, along with APIs for overriding the channel created in the `Client` constructor. 
13 | 14 | ## Background 15 | 16 | In the past, some users have requested the ability to explicitly create channels and share them among clients. In addition, another library needs to be able to intercept some channel functionality. 17 | 18 | ## Proposal 19 | 20 | We will add to the API a `Channel` class and two `Client` construction options. 21 | 22 | ### New `Channel` Class API 23 | 24 | ```ts 25 | enum ConnectivityState { 26 | IDLE = 0, 27 | CONNECTING = 1, 28 | READY = 2, 29 | TRANSIENT_FAILURE = 3, 30 | SHUTDOWN = 4 31 | } 32 | 33 | class Channel { 34 | /** 35 | * This constructor API is almost identical to the Client constructor, 36 | * except that some of the options for the Client constructor are not valid 37 | * here. 38 | * @param target The address of the server to connect to 39 | * @param credentials Channel credentials to use when connecting 40 | * @param options A map of channel options that will be passed to the core 41 | */ 42 | constructor(target: string, credentials: ChannelCredentials, options: {[key: string]: string|number}); 43 | /** 44 | * Close the channel. This has the same functionality as the existing grpc.Client.prototype.close 45 | */ 46 | close(): void; 47 | /** 48 | * Return the target that this channel connects to 49 | */ 50 | getTarget(): string; 51 | /** 52 | * Get the channel's current connectivity state. This method is here mainly 53 | * because it is in the existing internal Channel class, and there isn't 54 | * another good place to put it. 55 | * @param tryToConnect If true, the channel will start connecting if it is 56 | * idle. Otherwise, idle channels will only start connecting when a 57 | * call starts. 58 | */ 59 | getConnectivityState(tryToConnect: boolean): ConnectivityState; 60 | /** 61 | * Watch for connectivity state changes. This is also here mainly because 62 | * it is in the existing external Channel class. 63 | * @param currentState The state to watch for transitions from.
This should 64 | * always be populated by calling getConnectivityState immediately 65 | * before. 66 | * @param deadline A deadline for waiting for a state change 67 | * @param callback Called with no error when a state change occurs, or with an 68 | * error if the deadline passes without a state change. 69 | */ 70 | watchConnectivityState(currentState: ConnectivityState, deadline: Date|number, callback: (error?: Error) => void); 71 | /** 72 | * Create a call object. Call is an opaque type that is used by the Client 73 | * class. This function is called by the gRPC library when starting a 74 | * request. Implementers should return an instance of Call that is returned 75 | * from calling createCall on an instance of the provided Channel class. 76 | * @param method The full method string to request. 77 | * @param deadline The call deadline 78 | * @param host A host string override for making the request 79 | * @param parentCall A server call to propagate some information from 80 | * @param propagateFlags A bitwise combination of elements of grpc.propagate 81 | * that indicates what information to propagate from parentCall. 82 | */ 83 | createCall(method: string, deadline: Date|number, host: string|null, parentCall: Call|null, propagateFlags: number|null): Call; 84 | } 85 | ``` 86 | 87 | ### New `Client` Construction Options 88 | 89 | We will add the following options to the `Client` constructor's options map: 90 | 91 | - `channelOverride`: a `Channel` instance. The `Client` will use this channel for communicating, and will ignore all other channel construction options. 92 | - `channelFactoryOverride`: A function that takes the same arguments as the `Channel` constructor and returns a `Channel` or an object that implements the `Channel` API. This uses the channel construction arguments passed to the client constructor.
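To illustrate the second option, here is a sketch of a `channelFactoryOverride` that shares one channel per target among clients. This is illustrative only: `makeSharingChannelFactory` is a made-up helper, and the injected `ChannelCtor` stands in for the `Channel` class proposed above.

```javascript
// Sketch: build a channelFactoryOverride that caches channels by target so
// that multiple Client instances share one underlying Channel.
// `ChannelCtor` stands in for the proposed grpc.Channel class.
function makeSharingChannelFactory(ChannelCtor) {
  const channels = new Map();
  return function channelFactoryOverride(target, credentials, options) {
    if (!channels.has(target)) {
      channels.set(target, new ChannelCtor(target, credentials, options));
    }
    return channels.get(target);
  };
}

// Hypothetical usage with the option proposed above:
// new grpc.Client(target, creds, {
//   channelFactoryOverride: makeSharingChannelFactory(grpc.Channel)
// });
```

Because the factory receives the same arguments as the `Channel` constructor, a wrapper like this needs no knowledge of how the client was configured.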
93 | 94 | ## Rationale 95 | 96 | The proposed `Channel` API closely matches the existing internal `Channel` class, with the addition of the existing internal `Call` constructor as `createCall`, which is necessary if we want to allow `channelFactoryOverride` to return wrappers or other `Channel` API implementations. The `Channel` API is also very similar in the pure JavaScript implementation, so this minimizes the difficulty of porting this new API to that library. On the other hand, the internal `Call` API is very different in the two libraries, which is why the `Call` class should be opaque. 97 | 98 | 99 | ## Implementation 100 | 101 | I (@murgatroid99) will implement this in the Node.js library after this proposal is accepted. 102 | -------------------------------------------------------------------------------- /L33-node-checkServerIdentity-callback.md: -------------------------------------------------------------------------------- 1 | Exposing the checkServerIdentity callback in Node 2 | ---- 3 | * Author(s): Ian Haken 4 | * Approver: murgatroid99 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: July 23, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/ucorPSDGIIo 9 | 10 | ## Abstract 11 | 12 | gRPC core now exposes a `verify_peer_callback` option for clients connecting to servers that allows the client to perform additional verification against the peer's presented certificate. This gRFC covers how this callback should be exposed in the Node API. 13 | 14 | ## Background 15 | 16 | The proposal is to base the callback on Node's [tls.checkServerIdentity](https://nodejs.org/api/tls.html#tls_tls_checkserveridentity_hostname_cert) callback. However, the certificate provided to this callback has been parsed into numerous fields whereas the `verify_peer_callback` only provides the raw PEM to the callback. This proposal covers how we should bridge this gap. 
17 | 18 | It is further worth noting that we want to maintain feature parity with the grpc-js-core implementation. So the API we choose should be easily interoperable with the callback available in the pure JavaScript implementation, which would take advantage of Node's built-in checkServerIdentity callback. 19 | 20 | ### Related Proposals: 21 | N/A 22 | 23 | ## Proposal 24 | 25 | This proposal suggests that all we expose in the Node gRPC API are the raw DER bytes of the certificate. This would be presented in the `raw` key of an object. To illustrate, this would look like: 26 | 27 | ``` 28 | grpc.credentials.createSsl( 29 | ca_store, 30 | client_key, 31 | client_cert, 32 | { 33 | "checkServerIdentity": function(host, cert) { 34 | /* 35 | cert = { 36 | raw: <Buffer> 37 | } 38 | */ 39 | } 40 | }); 41 | ``` 42 | 43 | 44 | ## Rationale 45 | The lowest common denominator between the fully-parsed certificate made available in Node's callback and the certificate passed in to the callback by grpc-core is the raw certificate (in DER and PEM formats respectively). For this reason, we are suggesting only exposing the raw certificate and leaving it up to consumers of this callback to parse the certificate as desired. 46 | 47 | This avoids needing to parse out all of the fields of a certificate and trying to match the full format exposed in Node's callback. However, by choosing to place the raw DER bytes in a Buffer in the `raw` field, this matches Node's behavior with respect to this field and thus leaves open the option of parsing additional fields to better match Node's implementation in the future. 48 | 49 | ## Implementation 50 | 51 | [Ian Haken](https://github.com/JackOfMostTrades) will be implementing this proposal. 52 | 53 | ## Open issues (if applicable) 54 | 55 | Developers utilizing this new `checkServerIdentity` callback may expect it to behave identically to Node's `checkServerIdentity` callback. I.e.
they may expect to be able to apply certificate pinning by asserting `cert.fingerprint === '01:02...'`. The documentation will need to be clear that only the `raw` key is populated in the `cert` parameter of this callback. 56 | 57 | -------------------------------------------------------------------------------- /L34-cpp-opencensus-span-api.md: -------------------------------------------------------------------------------- 1 | OpenCensus C++ Span API changes 2 | ---- 3 | * Author(s): @g-easy 4 | * Approver: vjpai 5 | * Status: Draft 6 | * Implemented in: C++ 7 | * Last updated: 2018-07-17 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/ZpXeFinzl5A 9 | 10 | ## Abstract 11 | 12 | Add a public API for getting access to the OpenCensus Span for the current RPC. 13 | 14 | ## Background 15 | 16 | The [OpenCensus](https://opencensus.io/) 17 | [filter](https://github.com/grpc/grpc/tree/master/src/cpp/ext/filters/census) 18 | creates a Span per RPC, which can be accessed through the `ServerContext` 19 | object. 20 | 21 | Currently, the public API for the filter comes from: 22 | ``` 23 | #include <grpcpp/opencensus.h> 24 | ``` 25 | 26 | And `GetSpanFromServerContext()` comes from: 27 | ``` 28 | #include "src/cpp/ext/filters/census/grpc_plugin.h" 29 | ``` 30 | 31 | ### Related Proposals: 32 | 33 | [L29](https://github.com/grpc/proposal/blob/master/L29-opencensus-filter.md) was 34 | the initial gRFC for OpenCensus integration. 35 | 36 | ## Proposal 37 | 38 | [PR15984](https://github.com/grpc/grpc/pull/15984) proposes moving 39 | `GetSpanFromServerContext()` into `grpcpp/opencensus.h` 40 | 41 | ## Rationale 42 | 43 | Pros: 44 | * Only one file needs to be `#include`d for OpenCensus tracing, instead of two. 45 | 46 | Cons: 47 | * `grpcpp/opencensus.h` has to depend on `opencensus/trace/span.h` which, in 48 | turn, depends on `absl/strings/string_view.h` 49 | 50 | OpenCensus depends on Abseil, so at some point the consumer inevitably has to 51 | build it and link with it.
52 | 53 | ## Implementation 54 | 55 | See PR15984. 56 | 57 | ## Open issues (if applicable) 58 | 59 | N/A 60 | -------------------------------------------------------------------------------- /L35-node-getAuthContext.md: -------------------------------------------------------------------------------- 1 | Exposing the per-call authentication context data in Node 2 | ---- 3 | * Author(s): Nicolas Noble, murgatroid99 4 | * Approver: murgatroid99 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: 2025-03-06 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/yVnvHDGxTME 9 | 10 | ## Abstract 11 | 12 | gRPC-node now exposes a `getAuthContext` method on the call objects that allows clients and servers to retrieve the authentication context information. This gRFC covers how this new method should be exposed in the Node API. 13 | 14 | ## Background 15 | 16 | The proposal is to have the new `getAuthContext` method return an idiomatic Node object, hiding away the iterative structure of the core's auth_context. 17 | 18 | We want to maintain feature parity with the pure JavaScript implementation of gRPC. While the spirit of the core's auth_context is to be opaque and extensible, leaving it up to the developers to interpret the properties, we need to be able to emulate it in the pure JavaScript implementation, and guarantee feature parity. This means we need to whitelist fields we are reading from the native core, and transform them in a way that we know can work similarly between the two implementations. 19 | 20 | It is worth noting that a somewhat equivalent API call in the Node runtime is [tls.TLSSocket.getPeerCertificate](https://nodejs.org/api/tls.html#tls_tlssocket_getpeercertificate_detailed), which returns an actual SSL certificate. The only common field in this object that we can guarantee to be identical between the two implementations is the `raw` field, which contains a Buffer with the binary representation of the peer certificate.
21 | 22 | Therefore, this proposal proposes to cover only two fields at the beginning: 23 | - `transport_security_type`, transformed into a singleton string `transportSecurityType` 24 | - `x509_pem_cert`, transformed into the object: `sslPeerCertificate: { raw: certificateBuffer }` 25 | 26 | ### Related Proposals: 27 | N/A 28 | 29 | ## Proposal 30 | 31 | This proposal suggests that all we expose in the Node gRPC API are the raw DER bytes of the certificate for the `x509_pem_cert` field if present, and the singleton element `transport_security_type`. This would be presented in the `raw` key of an object. To illustrate, this would look like: 32 | 33 | ```js 34 | const authContext = call.getAuthContext() 35 | /* 36 | authContext = { 37 | transportSecurityType: 'ssl', 38 | sslPeerCertificate: { 39 | raw: <Buffer> 40 | } 41 | } 42 | */ 43 | ``` 44 | 45 | If the connection is not secure, neither field will be set. If the call is not associated with a connection at all, `getAuthContext` will return `null`. 46 | 47 | ### Expanded contents in `@grpc/grpc-js` 48 | 49 | In the `@grpc/grpc-js` library, the `sslPeerCertificate` field will contain the full contents of the `getPeerCertificate` result. 50 | 51 | ## Rationale 52 | The lowest common denominator between the fully-parsed certificate made available in Node's getPeerCertificate method and the certificate stored in the auth_context by grpc-core is the raw certificate (in DER and PEM formats respectively). For this reason, we are suggesting only exposing the raw certificate and leaving it up to consumers of this callback to parse the certificate as desired. 53 | 54 | This avoids needing to parse out all of the fields of a certificate and trying to match the full format exposed in Node's getPeerCertificate method.
However, by choosing to place the raw DER bytes in a Buffer in the `raw` field, this matches Node's behavior with respect to this field and thus leaves open the option of parsing additional fields to better match Node's implementation in the future. 55 | 56 | 57 | ### Expanded contents in `@grpc/grpc-js` 58 | 59 | The `grpc` library was deprecated in 2021. As a result, supporting the ability to migrate from `@grpc/grpc-js` to `grpc` is no longer a significant concern, so we can add a non-forward-compatible API. 60 | 61 | ## Implementation 62 | 63 | murgatroid99 will be implementing this proposal. 64 | 65 | ## Open issues (if applicable) 66 | 67 | Developers utilizing this new `getAuthContext` method may expect it to behave similarly to Node's `getPeerCertificate` method. I.e. they may expect to be able to apply certificate pinning by asserting `cert.fingerprint === '01:02...'`. The documentation will need to be clear that only the `raw` key is populated in the `sslPeerCertificate` property.
68 | -------------------------------------------------------------------------------- /L38_graphics/class_diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L38_graphics/class_diagram.png -------------------------------------------------------------------------------- /L39-core-remove-grpc-use-signal.md: -------------------------------------------------------------------------------- 1 | Remove grpc_use_signal from core surface API 2 | ---- 3 | * Author(s): vjpai 4 | * Approver: AspirinSJL 5 | * Status: Proposed 6 | * Implemented in: https://github.com/grpc/grpc/pull/16706 7 | * Last updated: 2018-09-26 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/a1ScqicqKQk 9 | 10 | ## Abstract 11 | 12 | Remove `grpc_use_signal` from the core surface API 13 | 14 | ## Background 15 | 16 | The `epollsig` polling engine used signals to kick polling 17 | threads. This polling engine has been deprecated for a while and has 18 | been [deleted](https://github.com/grpc/grpc/pull/16679). The 19 | `grpc_use_signal` surface API allowed code to prevent the use of 20 | signals or to change the signal number used by `epollsig`. With the 21 | deletion of this polling engine, gRPC core no longer uses signals of 22 | any kind, and this API should be deleted. 23 | 24 | 25 | ### Related Proposals: 26 | 27 | N/A 28 | 29 | ## Proposal 30 | 31 | Delete the surface API and bump the core version number. 32 | 33 | ## Rationale 34 | 35 | Not only is this surface API no longer useful, but I also cannot find any 36 | example of it being used.
37 | 38 | ## Implementation 39 | 40 | Core: https://github.com/grpc/grpc/pull/16706 41 | 42 | ## Open issues (if applicable) 43 | 44 | N/A 45 | -------------------------------------------------------------------------------- /L40-node-call-invocation-transformer.md: -------------------------------------------------------------------------------- 1 | gRPC Node Call Invocation Transformer API 2 | ---- 3 | * Author(s): murgatroid99, WeiranFang 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: 26-09-2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/f3v4SBvj7L4 9 | 10 | ## Abstract 11 | 12 | Add a client API for transforming some parameters and intermediate variables for each call made using that client. 13 | 14 | ## Background 15 | 16 | Some advanced call interception use cases, such as content-based channel affinity, require functionality that is not provided by the existing `CallInterceptor` API. 17 | 18 | 19 | ### Related Proposals: 20 | * [L5-NODEJS-CLIENT_INTERCEPTORS](L5-NODEJS-CLIENT_INTERCEPTORS.md) 21 | 22 | ## Proposal 23 | 24 | Define the type `CallContext` as follows: 25 | 26 | ```ts 27 | interface CallContext { 28 | // The argument to the method. Only available for unary and client-streaming methods 29 | argument: any; 30 | // The metadata that will be sent to the server 31 | metadata: Metadata; 32 | // The call object that will be returned by the method 33 | call: ClientUnaryCall | ClientReadableStream | ClientWritableStream | ClientDuplexStream; 34 | // The channel object that will be used to transmit the request 35 | channel: Channel; 36 | // An object describing the request method 37 | methodDefinition: MethodDefinition; 38 | // The options object passed to the call 39 | callOptions: CallOptions; 40 | // The callback passed to the method.
Only available for unary and client-streaming methods 41 | callback: requestCallback; 42 | } 43 | ``` 44 | 45 | Add a new option to the `Client` constructor `options` parameter called `callInvocationTransformer`, that accepts a function that takes a `CallContext` object as an input and returns another `CallContext` object as the output. This function can read and modify any of the values in the object, and the returned values will be used for processing the request, with the caveat that some modifications may cause the request to be processed improperly. 46 | 47 | ## Rationale 48 | 49 | The specific use case that prompted this proposal needs to change the channel based on the unserialized argument, and to track response messages and call start and end associated with each channel. The only part of the code with access to all of those things is the beginning of each call invocation function in the client. So, for generality, this API provides access to a set of objects that consistently describe the inputs and state that determine how a call is invoked, including the ones that the mentioned use case needs. 50 | 51 | 52 | ## Implementation 53 | 54 | The implementation in the `grpc` library will be completed by @WeiranFang after this proposal is accepted. An initial implementation PR exists at grpc/grpc-node#557. 55 | 56 | The implementation in the `@grpc/grpc-js` library will be completed by @murgatroid99 after the client interceptors API is implemented in that library.
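As a sketch of what such a transformer might look like for a content-based affinity use case, the function below stamps a field of the unserialized argument into the call metadata. The `x-shard-key` header and `shardKey` field are invented for illustration; the context fields follow the `CallContext` definition above.

```javascript
// Sketch: a callInvocationTransformer that routes on a field of the
// unserialized request argument by stamping it into the call metadata.
// `context` has the CallContext shape proposed above.
function affinityTransformer(context) {
  if (context.argument && context.argument.shardKey !== undefined) {
    // 'x-shard-key' is a hypothetical application-defined header.
    context.metadata.add('x-shard-key', String(context.argument.shardKey));
  }
  return context;
}

// Hypothetical usage with the proposed option:
// new Client(target, creds, { callInvocationTransformer: affinityTransformer });
```

Note that the transformer runs once per call invocation, before the request is transmitted, so changes to `metadata` (or, more invasively, `channel`) apply to that call only.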
57 | -------------------------------------------------------------------------------- /L41-node-server-async-bind.md: -------------------------------------------------------------------------------- 1 | Asynchronous port binding method in Node Server API 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: 28-09-2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/Uio1Ja6GRHI 9 | 10 | ## Abstract 11 | 12 | Add a method to the Node gRPC `Server` class that binds a port as an asynchronous operation. 13 | 14 | ## Background 15 | 16 | In the initial development of the Node gRPC library, the `Server` class method to bind a port, called `bind`, was implemented as a synchronous operation because it was available as a synchronous operation in the underlying core library. In contrast, in the Node built in `net` module, the `Server` class's `listen` method, which performs a similar function, is asynchronous. In the new pure JavaScript Node gRPC library, the underlying network operations use the `net` module, so it is impossible to implement the `bind` method as a synchronous operation. 17 | 18 | 19 | ## Proposal 20 | 21 | We would add a method called `bindAsync` to the `Server` class that implements the same functionality as `bind`, but provides the result asynchronously. The specific function signature would be as follows: 22 | 23 | ```ts 24 | class Server { 25 | 26 | // ... 27 | 28 | /** 29 | * @param port The port that the server should bind on, in the format "address:port" 30 | * @param creds Server credentials object to be used for SSL. 
31 | * @param callback Callback to be called asynchronously with the result of binding the port 32 | */ 33 | bindAsync(port: string, creds: ServerCredentials, callback: (error: Error, port: number)=>void): void; 34 | } 35 | ``` 36 | 37 | The semantics of the arguments are exactly the same as with the existing `bind` method, and the semantics of the port passed to the callback are identical to the semantics of the return value of the existing `bind` method: a negative number indicates failure, and a positive number indicates success binding to that port number. An error will also be passed to the callback if the resulting port is negative. 38 | 39 | ## Rationale 40 | 41 | The semantics of `bindAsync` would be identical to the semantics of `bind` except that it is asynchronous, so the name communicates that. Keeping the semantics and API as similar as possible minimizes the work needed to transition between the two. The semantically redundant `error` is passed to the callback for greater consistency with other Node APIs, so that it can be used with functions such as `util.promisify()`. We could also implement the asynchrony using promises, but a callback-based API is more consistent with the rest of the existing API. In the future, promise support could be added, perhaps by returning a promise when no callback is provided. 42 | 43 | 44 | ## Implementation 45 | 46 | I (@murgatroid99) will implement this in the grpc library as soon as this proposal is accepted, and in the @grpc/grpc-js library as part of the server implementation.
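As a sketch of the promise interoperability mentioned above, a caller could wrap the proposed callback API like this. The wrapper name is made up; `server` is anything implementing the `bindAsync` signature above.

```javascript
// Sketch: wrap the proposed bindAsync callback API in a Promise, much as
// util.promisify(server.bindAsync) would.
function bindAsyncPromised(server, port, creds) {
  return new Promise((resolve, reject) => {
    server.bindAsync(port, creds, (error, boundPort) => {
      if (error) {
        reject(error);
      } else {
        resolve(boundPort);
      }
    });
  });
}

// Hypothetical usage:
// const boundPort = await bindAsyncPromised(server, '0.0.0.0:50051', creds);
```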
47 | -------------------------------------------------------------------------------- /L43-node-type-info.md: -------------------------------------------------------------------------------- 1 | Node Message Type Information 2 | ---- 3 | * Author(s): mlumish 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: 2018-11-19 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/Vhz9AIMQ9AM 9 | 10 | ## Abstract 11 | 12 | This document proposes a generic format for representing message type information in package definition objects that are intended for use with gRPC, as well as a more specific format for representing Protobuf message and enum type information using that more general format. In addition, this document specifies how the `@grpc/proto-loader` library will expose this information. 13 | 14 | ## Background 15 | 16 | In the original `grpc` library, when loading `.proto` files using `grpc.load` or `grpc.loadObject`, services are represented by a custom type defined for gRPC that is not specific to the Protobuf implementation or even the Protobuf format in general. However, enums and messages are represented using the reflection types specific to Protobuf.js 5. This restricted our ability to change the underlying Protobuf.js. We solved that problem by simply omitting that type information from the output of `@grpc/proto-loader`, but users have since requested that that information be made available in issues and pull requests such as grpc/grpc-node#407 and grpc/grpc-node#448. In addition, the Google Cloud client libraries need some of this reflection information to implement certain features. 
17 | 18 | ## Proposal 19 | 20 | ### Generic type object structure 21 | 22 | The generic object structure for representing message types will be as follows: 23 | 24 | ```ts 25 | { 26 | format: string; 27 | type: any; 28 | } 29 | ``` 30 | 31 | The `format` will be a string that describes the kind of message type information that is represented by the object. Examples of format strings that might be used include "JSON Schema" and "Protocol Buffer 3 DescriptorProto". The `type` will be an object or other value, depending on the format, that represents the specific message type. For example, if `format` is "JSON Schema", `type` might be a plain JavaScript object representation of the JSON Schema of the specific message type. These are the only two fields that will be assumed to have specific semantics across all possible formats. Some formats may define additional fields in the same object. 32 | 33 | ### Protobuf type object structure 34 | 35 | The object structure for representing Protobuf message types will be as follows: 36 | 37 | ```ts 38 | { 39 | format: "Protocol Buffer 3 DescriptorProto", 40 | type: DescriptorProtoObject, 41 | fileDescriptorProtos: Buffer[] 42 | } 43 | ``` 44 | 45 | A `DescriptorProtoObject` is a plain JavaScript object that contains the [canonical JSON Mapping](https://developers.google.com/protocol-buffers/docs/proto3#json) representation of a `DescriptorProto` message defined in the well known proto file `descriptor.proto`. The `fileDescriptorProtos` array contains a list of `.proto` files that is sufficient to fully define this message type, represented as serialized `FileDescriptorProto` messages defined in the same proto file `descriptor.proto`. The primary purpose of this field is to be used in implementing the gRPC reflection API. 46 | 47 | Protobuf enum types will be represented with a nearly identical structure, except with `EnumDescriptorProto` instead of `DescriptorProto`. 
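As an illustrative sketch of how a consumer might handle these objects, the snippet below builds a hand-written type object in the proposed shape and dispatches on its `format` string before interpreting `type`. The message contents are invented for illustration.

```javascript
// Sketch: a type object in the proposed generic structure, using the
// Protobuf format string defined above. The DescriptorProto content here
// is a made-up example in the canonical JSON mapping.
const typeObject = {
  format: 'Protocol Buffer 3 DescriptorProto',
  type: {
    name: 'HelloRequest',
    field: [{ name: 'name', number: 1, type: 'TYPE_STRING' }]
  },
  // In practice: serialized FileDescriptorProto buffers for reflection.
  fileDescriptorProtos: []
};

// A consumer should dispatch on the format string before touching `type`,
// since `type`'s structure is format-dependent.
function messageName(typeObj) {
  switch (typeObj.format) {
    case 'Protocol Buffer 3 DescriptorProto':
      return typeObj.type.name;
    default:
      throw new Error('Unknown type format: ' + typeObj.format);
  }
}
```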
48 | 49 | ### `@grpc/proto-loader` type output 50 | 51 | The output of `@grpc/proto-loader` will include Protobuf type objects in two different places: 52 | 53 | - The top-level object will include every loaded message and enum type in this object format referenced by its corresponding fully qualified name. 54 | - Each `MethodDefinition` object will contain the keys `RequestType` and `ResponseType` that will map to references to the corresponding message type objects. 55 | 56 | ## Rationale 57 | 58 | gRPC itself does not depend on Protobuf or any other specific serialization format. So, for maximal generality, we need a representation that specifies the serialization format itself in data. Similarly, for generality across different Protobuf implementations, we need an implementation-independent standard representation. The `DescriptorProto` message format is an existing standard representation for Protobuf types, and the JSON representation is easy to use within JavaScript and is independent of the Protobuf implementation. 59 | 60 | The proposed `@grpc/proto-loader` output matches how the type information is output when using `grpc.load`, but it uses the new generic type representation instead of the type representation specific to Protobuf.js 5. 61 | 62 | 63 | ## Implementation 64 | 65 | I (murgatroid99) will implement this proposal in the `@grpc/proto-loader` library, using [Protobuf.js's descriptor proto compatibility extension library](https://github.com/dcodeIO/protobuf.js/tree/master/ext/descriptor) to transform the Protobuf information we are already loading into the generic `DescriptorProto` representation described above. 66 | 67 | Further in the future, this should also be implemented in the Node gRPC `protoc` plugin distributed in the `grpc-tools` package. 
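The two output locations can be sketched with plain objects standing in for real `@grpc/proto-loader` output (all names here are hypothetical). The point is that a `MethodDefinition`'s `RequestType` and `ResponseType` keys reference the same type objects exposed at the top level:

```js
// Hypothetical sketch of the package definition shape described above;
// this is not actual @grpc/proto-loader output.
const greetingType = {
  format: 'Protocol Buffer 3 DescriptorProto',
  type: { name: 'Greeting', field: [] },
  fileDescriptorProtos: []
};

const packageDefinition = {
  // Top-level entries: every loaded message and enum type, keyed by its
  // fully qualified name.
  'example.Greeting': greetingType,
  // A service whose MethodDefinition carries RequestType and ResponseType
  // references (this invented service echoes a Greeting back).
  'example.Greeter': {
    SayHello: {
      path: '/example.Greeter/SayHello',
      requestStream: false,
      responseStream: false,
      RequestType: greetingType,
      ResponseType: greetingType
    }
  }
};

// The method's type reference and the top-level entry are the same object.
const sameObject =
  packageDefinition['example.Greeter'].SayHello.RequestType ===
  packageDefinition['example.Greeting'];
console.log(sameObject); // true
```

Sharing references rather than copies means a consumer (for example, a reflection service implementation) can compare type objects by identity and avoid deserializing the same `fileDescriptorProtos` twice.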
68 | -------------------------------------------------------------------------------- /L45-cpp-server-load-reporting.md: -------------------------------------------------------------------------------- 1 | C++ API changes for Server Load Reporting 2 | ---- 3 | * Author(s): AspirinSJL 4 | * Approver: markdroth, vjpai 5 | * Status: Draft 6 | * Implemented in: C++ 7 | * Last updated: 2018-07-16 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/6p8tL8UdG9s 9 | 10 | ## Abstract 11 | 12 | As part of the load balancing functionality, the server-side load reporting service is to be introduced. A `grpc::load_reporter::LoadReportingServiceServerBuilderOption` will be used to enable that service. A utility function `grpc::load_reporter::AddLoadReportingCost()` can be used to report the customized call metrics. 13 | 14 | ## Background 15 | 16 | A load balancer needs to get the load data of the servers in order to balance the load among the servers accordingly. One possible approach to get the load data is to request the load data from the servers via RPCs. That requires the servers to register an additional service to report the load data. 17 | 18 | 19 | ### Related Proposals: 20 | N/A 21 | 22 | ## Proposal 23 | 24 | We propose a new public header `include/grpcpp/ext/server_load_reporting.h` to use the server load reporting service. Note that this public header (and the load reporting service) is only available when the binary is built with the `grpcpp_server_load_reporting` library. That library is only available in Bazel build because it depends on OpenCensus which can only be built with Bazel. 25 | 26 | 1. The header contains a `grpc::load_reporter::LoadReportingServiceServerBuilderOption`. The user should set that option in the `grpc::ServerBuilder` to enable the load reporting service on a server. 27 | 2. 
The header also contains a utility function `void grpc::load_reporter::AddLoadReportingCost(grpc::ServerContext* ctx, const grpc::string& cost_name, double cost_value);`. Besides the canonical call metrics (e.g., the number of calls started, the total bytes received, etc), the user can use this function to add other customized call metrics from their own normal services. 28 | 29 | We also propose a cleanup change. 30 | 31 | - The previous public header `include/grpc/load_reporting.h` will be renamed to `include/grpc/server_load_reporting.h` to distinguish from the client load reporting in grpclb. 32 | 33 | ## Rationale 34 | 35 | 1. The load reporting service is opted in via a `grpc::ServerBuilderOption` instead of being automatically enabled because this service should only be enabled when there will be a load balancer requesting the load data. Otherwise, the load data will be accumulated for nothing. 36 | 2. The header under `include/grpc` is renamed to have a "server_" prefix because we actually have [client load reporting in grpclb](https://github.com/grpc/grpc/blob/85daf2db65d60ebd63936a936d69c63777123d10/src/core/ext/filters/client_channel/lb_policy/grpclb/client_load_reporting_filter.h). The new name will also be more consistent with `include/grpcpp/ext/server_load_reporting.h`. 37 | 38 | ## Implementation 39 | 40 | Implementation is completed. The new APIs are currently under `experimental` namespace. 41 | 42 | - The public APIs are in `include/grpcpp/ext/server_load_reporting.h`. 43 | - The protobuf definition is in `src/proto/grpc/lb/v1/load_reporter.proto`. 44 | - The implementation is under `src/cpp/server/load_reporter`. 
45 | -------------------------------------------------------------------------------- /L48-node-metadata-options.md: -------------------------------------------------------------------------------- 1 | Node gRPC Metadata options 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Status: Draft 6 | * Implemented in: Node 7 | * Last updated: 2019-03-01 8 | * Discussion at: (filled after thread exists) 9 | 10 | ## Abstract 11 | 12 | Add options to the Metadata class corresponding to start-of-call flags in the core. 13 | 14 | ## Background 15 | 16 | The gRPC core library defines a few "metadata" flags that can be passed to `grpc_call_start_batch` at the beginning of a call in a "send initial metadata" `grpc_op`. Currently, there is no way for a user of the Node gRPC library to set those flags. 17 | 18 | ## Proposal 19 | 20 | Currently the `Metadata` constructor takes no arguments. We will modify it to take one argument, which is an "options" object mapping string keys to boolean values. These options correspond to the core's [initial metadata flags](https://github.com/grpc/grpc/blob/a4b8667de96f8be14236a7312dca52c492a6d159/include/grpc/impl/codegen/grpc_types.h#L432). The following options will be accepted: 21 | 22 | - `idempotentRequest`: Signal that the call is idempotent. Default: `false` 23 | - `waitForReady`: Signal that the call should not return UNAVAILABLE before it has started. Default: `true` 24 | - `cacheableRequest`: Signal that the call is cacheable. gRPC is free to use the GET verb. Default: `false` 25 | - `corked`: Signal that the initial metadata should be corked. Default: `false` 26 | 27 | We will also add a `setOptions` method to the `Metadata` class that accepts the same options object and replaces the previously set options. 28 | 29 | ## Rationale 30 | 31 | These flags need to be passed to the core in `grpc_call_start_batch` in a "send initial metadata" `grpc_op`, so they can be passed if and only if metadata is also passed. 
The simplest way to accomplish this is to make these flags part of the `Metadata` structure. It is useful to include a setter to allow users to easily use the same metadata with different calls that would want to use different flags. 32 | 33 | 34 | ## Implementation 35 | 36 | I (murgatroid99) will implement this in both `grpc` and `@grpc/grpc-js`. The initial implementation in `@grpc/grpc-js` will have no behavioral changes from the options. 37 | -------------------------------------------------------------------------------- /L50_graphics/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L50_graphics/overview.png -------------------------------------------------------------------------------- /L51-java-rm-nano-proto.md: -------------------------------------------------------------------------------- 1 | Java: Drop support for Protobuf's Javanano 2 | ---- 3 | * Author(s): [Eric Anderson](https://github.com/ejona86) 4 | * Approver: voidzcy 5 | * Status: Final 6 | * Implemented in: Java 7 | * Last updated: 2019-05-21 8 | * Discussion at: https://groups.google.com/d/topic/grpc-io/ErFnaXbIuZc/discussion 9 | 10 | ## Abstract 11 | 12 | Remove support for the 13 | [Javanano](https://search.maven.org/search?q=g:com.google.protobuf.nano) 14 | Protobuf API. 15 | 16 | ## Background 17 | 18 | Javanano ("nano" or "nano proto") was an Android-centric API that uses open 19 | structures to optimize dex method count and size. It was released in protobuf 20 | 3.0.0 alpha, and apparently a 3.1.0 (although this may have been a mistake, as 21 | it wasn't re-released later and it wasn't in the release notes). This nano API 22 | is very cumbersome and ugly. Javanano can't really be used by libraries, as 23 | there are a _lot_ of configuration options for the code generator and they 24 | often need to be tuned for the particular app. 
25 | 26 | [Protobuf Lite](https://search.maven.org/search?q=g:com.google.protobuf%20a:protobuf-lite) 27 | is a subset of the full protobuf API also intended for Android. After tools 28 | like ProGuard, it can have a similar size to Javanano. There is also ongoing work 29 | that makes it better optimized for Android than nano. 30 | 31 | Upstream protobuf removed support for nano in 3.6, and [is encouraging users to 32 | use lite instead of 33 | nano](https://github.com/protocolbuffers/protobuf/issues/5288). Since nano was 34 | included in protoc and not as a separate plugin, this has been preventing 35 | grpc-java [from upgrading protoc in certain 36 | circumstances](https://github.com/grpc/grpc-java/pull/5320). This problem will 37 | get worse with time. 38 | 39 | ### Related Proposals: 40 | 41 | None. 42 | 43 | ## Proposal 44 | 45 | * grpc-java completely drops support for protobuf nano 46 | * grpc-protobuf-nano is deleted from the source and no longer released 47 | * protoc-gen-grpc-java drops support for the 'nano' flag 48 | * grpc-bom will remove its reference to grpc-protobuf-nano 49 | 50 | ## Rationale 51 | 52 | grpc-protobuf-nano depends only on stable APIs. While it depends on 53 | MethodDescriptor.Marshaller, [Marshaller is to be stabilized in 54 | 1.21](https://github.com/grpc/grpc-java/pull/5617) with the same API it has had 55 | since 1.0. That means that even after it is no longer updated, the old releases 56 | will continue working. 57 | 58 | protoc-gen-grpc-java's generated code has always been forward-compatible; it 59 | does not rely on unstable APIs. So even after newer versions drop nano support, 60 | older releases will continue working. 
61 | 62 | This means a simple deletion of the code in grpc-java allows grpc-java to move 63 | forward with newer versions of protobuf/protoc while letting existing users 64 | continue as they were, although they should strongly consider migrating to lite 65 | (although they should also be aware that the [lite API is considered unstable 66 | API](https://github.com/protocolbuffers/protobuf/blob/v3.7.1/java/lite.md)). 67 | 68 | Given how clean this proposal is, no alternatives were seriously considered. 69 | 70 | ## Implementation 71 | 72 | The PR to implement this is already available as grpc/grpc-java#5622. 73 | 74 | ## Open issues (if applicable) 75 | 76 | None. 77 | -------------------------------------------------------------------------------- /L52-core-static-method-host.md: -------------------------------------------------------------------------------- 1 | Static strings for `method` and `host` for the input to `grpc_channel_register_call()` 2 | ---- 3 | * Author(s): [Soheil Hassas Yeganeh](https://github.com/soheilhy) 4 | * Approver: vjpai, yashykt 5 | * Status: Approved 6 | * Implemented in: core 7 | * Last updated: 2019-06-05 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/iYwUVQKU-OU 9 | 10 | ## Abstract 11 | 12 | This document proposes a requirement to accept only static strings (i.e., 13 | strings kept alive while gRPC is running) for `method` and `host` as the arguments to 14 | `grpc_channel_register_call()`. 15 | 16 | This change requires a major bump of the gRPC version. 17 | 18 | ## Background 19 | 20 | For pre-registration, `method` and `host` are registered on channels using 21 | `grpc_channel_register_call()`. Although no user that we are aware of uses 22 | non-static strings for `method` and `host`, we have to conservatively intern the 23 | strings since `grpc_channel_register_call()` is not clear about the ownership 24 | and lifetime of `method` and `host`. 
Interning `method` and `host` results in 25 | unnecessary ref-counting and cache-line contention for every single call. This 26 | can be eliminated by making them static. 27 | 28 | ## Proposal 29 | 30 | Require static strings for the `method` and `host` arguments to 31 | `grpc_channel_register_call()`. That is, the `method` and `host` strings must 32 | remain valid while the server is up. 33 | 34 | ## Rationale 35 | 36 | `method` and `host` are reffed for every call. Doing an atomic fetch-add per 37 | call is quite expensive, and truly unnecessary since the strings can easily be 38 | kept alive while the server is up. 39 | 40 | ## Implementation 41 | 42 | This is basically a simple comment update and removal of the `grpc_slice_intern()` 43 | calls for `method` and `host`, as implemented in 44 | [PR #19263](https://github.com/grpc/grpc/pull/19263). 45 | -------------------------------------------------------------------------------- /L55-objc-global-interceptor.md: -------------------------------------------------------------------------------- 1 | gRPC Objective-C Global Interceptor 2 | ---- 3 | * Author(s): muxi 4 | * Approver: psrini 5 | * Status: Approved 6 | * Implemented in: Objective-C 7 | * Last updated: 2019-07-10 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/tRKcPcfQKi4 9 | 10 | ## Abstract 11 | 12 | This proposal introduces a global interceptor for the gRPC Objective-C library. 13 | 14 | ## Background 15 | A global interceptor is an interceptor that runs across all calls in the same process, regardless of whether it is explicitly added to a particular call in its call options. It is a powerful feature for use cases such as global logging. The feature has been implemented in gRPC libraries of other languages. 
16 | 17 | 18 | ### Related Proposals: 19 | * L50 - gRPC Objective-C Interceptor 20 | 21 | ## Proposal 22 | 23 | ### Global interceptor interface 24 | 25 | The proposed gRPC Objective-C library's global interceptor interface is as follows: 26 | 27 | ```objectivec 28 | @interface GRPCCall2 (Interceptor) 29 | 30 | /** 31 | * Register a global interceptor's factory in the current process. Only one interceptor can be 32 | * registered in a process. If another one attempts to be registered, an exception will be raised. 33 | */ 34 | + (void)registerGlobalInterceptor:(nonnull id<GRPCInterceptorFactory>)interceptorFactory; 35 | 36 | /** 37 | * Get the global interceptor's factory. 38 | */ 39 | + (nullable id<GRPCInterceptorFactory>)globalInterceptorFactory; 40 | 41 | @end 42 | ``` 43 | 44 | Users register a global interceptor with the `registerGlobalInterceptor` method. Each process has a single global interceptor entry. The global interceptor may only be registered once; registering a second global interceptor will trigger an exception. 45 | 46 | ### Interceptor chain with global interceptor 47 | 48 | Once a global interceptor is registered, it will be inserted at the very end of the interceptor chain of all calls in the same process. 49 | 50 |
![The interceptor chain of a call with a registered global interceptor](L55_graphics/global-interceptor-chain.png)
53 | 54 | ## Rationale 55 | 56 | Global interceptors are a powerful feature that has been abused by some users of gRPC libraries in other languages in the past. As a result, most gRPC languages have imposed a one-global-interceptor policy, which helps identify the source of abusive use of global interceptors. The gRPC Objective-C library follows the same rule. Making the global interceptor the last one in the chain also follows the general practice of gRPC libraries in other languages. 57 | 58 | ## Implementation 59 | 60 | Implementation will be completed by mxyan@. 61 | -------------------------------------------------------------------------------- /L55_graphics/global-interceptor-chain.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L55_graphics/global-interceptor-chain.png -------------------------------------------------------------------------------- /L56_graphics/dependency.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L56_graphics/dependency.png -------------------------------------------------------------------------------- /L56_graphics/hierarchy1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L56_graphics/hierarchy1.png -------------------------------------------------------------------------------- /L56_graphics/hierarchy2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L56_graphics/hierarchy2.png -------------------------------------------------------------------------------- /L6-core-allow-cpp.md: 
-------------------------------------------------------------------------------- 1 | Allow C++ in gRPC Core Library 2 | ---- 3 | * Author(s): ctiller nicolasnoble 4 | * Approver: a11r 5 | * Status: Approved 6 | * Implemented in: n/a 7 | * Last updated: April 1, 2017 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/yAg-ydC77aE 9 | 10 | ## Abstract 11 | 12 | Allow C++ to be used in the gRPC Core library. 13 | 14 | ## Background 15 | 16 | The gRPC Core library is currently implemented in C99, with a C89 public 17 | interface. This gRFC proposes allowing C++ to be used within the core library, 18 | but continues to restrict the public interface to C89. 19 | 20 | ### Related Proposals: 21 | 22 | N/A 23 | 24 | ## Proposal 25 | 26 | Allow C++11 usage in gRPC Core, with the following caveats: 27 | - new/delete will be outlawed (in favor of wrappers that call into gpr_malloc, 28 | gpr_free - eg grpc_core::MakeUnique<>, grpc_core::Delete, grpc_core::New) 29 | - exceptions and rtti will be disallowed 30 | - standard library usage will be disallowed in favor of grpc core library 31 | substitutes 32 | - all code will be required to live under a grpc_core namespace 33 | - all resulting object code must be linkable with a normal C linker (no 34 | libstdc++ dependencies allowed) 35 | - public API must continue to be C89 36 | 37 | Provide a C++ utility library (much like GPR today) to assist implementation: 38 | - grpc_core::UniquePtr<> (as a typedef for std::unique_ptr) 39 | - grpc_core::Atomic<> (as a typedef for std::atomic) 40 | - grpc_core::IntrusiveSharedPtr<> 41 | - grpc_core::Vector<> 42 | - grpc_core::IntrusiveList<> 43 | - grpc_core::HashMap<> 44 | - grpc_core::AVL<> 45 | - grpc_core::Slice 46 | - grpc_core::Closure 47 | - grpc_core::ExecCtx 48 | - grpc_core::Combiner 49 | - grpc_core::Mutex 50 | 51 | Where possible, typedef equivalent types in the C++ stdlib (this would only be 52 | possible for header-only types). 
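The shape of these utilities can be illustrated with a minimal, hypothetical sketch — not the actual gRPC implementation — of how `grpc_core::UniquePtr<>` and `grpc_core::MakeUnique<>` could be layered over the stdlib while routing allocation through the gpr allocator. `gpr_malloc`/`gpr_free` are modeled here with `malloc`/`free` so the example is self-contained:

```cpp
#include <cstdlib>
#include <memory>
#include <new>
#include <utility>

namespace grpc_core {

// Hypothetical deleter: runs the destructor, then releases the memory
// through the gpr allocator (modeled with free() in this sketch).
template <typename T>
struct DefaultDelete {
  void operator()(T* p) {
    p->~T();
    free(p);  // the real library would call gpr_free()
  }
};

// UniquePtr as an alias over std::unique_ptr, per the "typedef equivalent
// stdlib types where possible" guidance above.
template <typename T>
using UniquePtr = std::unique_ptr<T, DefaultDelete<T>>;

// MakeUnique allocates via the gpr allocator (modeled with malloc()) and
// placement-constructs the object, so operator new is never used.
template <typename T, typename... Args>
UniquePtr<T> MakeUnique(Args&&... args) {
  void* mem = malloc(sizeof(T));  // the real library would call gpr_malloc()
  return UniquePtr<T>(new (mem) T(std::forward<Args>(args)...));
}

}  // namespace grpc_core
```

Because the deleter type is part of the alias, callers get RAII semantics without ever spelling `new`/`delete`, which is how the "outlawed new/delete" caveat can coexist with smart-pointer ergonomics.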
53 | 54 | ## Rationale 55 | 56 | Writing in C++ gives us several advantages: 57 | - Safer memory management utilizing templated containers, smart pointers 58 | - Simplify code by leveraging virtual functions and inheritance (the library 59 | contains many LOC that serve to emulate virtual functions and inheritance) 60 | - Easier contribution (experience has shown it’s easier to attract C++ than C 61 | developers - and though we’ll be missing standard libraries, equivalent concepts 62 | should prove easier to find) 63 | - C++ Performance: make it easier to drag lower level types into the C++ wrapper 64 | library, and reduce API friction 65 | - Increase velocity by simplifying our idioms 66 | 67 | ## Implementation 68 | 69 | 1. Allow .cc files to be included in builds 70 | The “language” tag only becomes a linking hint for the build systems, and the C 71 | core maintains the status of a C library. 72 | 2. Convert lame_client.c to be C++ as a canary 73 | 3. Pause for one release cycle to validate assumptions that this is all safe 74 | 4. Start converting src/core/ext/client_channel/... to C++, as this library would 75 | gain the most from being implementable in C++ (especially lb, resolver 76 | interfaces) 77 | 5. On-demand during (4), implement needed C++ foundational libraries 78 | 6. Allow broader use of C++ within the library 79 | 80 | ## Open issues (if applicable) 81 | 82 | - Build complexity: our wrapped languages will need to be able to handle compiling 83 | C++ in their build chains (though this is likely to need to happen for BoringSSL 84 | in the future also) 85 | - Most build systems just use the extension of the file to determine the 86 | compilation rule to apply. 87 | - Build time will increase. 88 | - Platform reach may decrease, but we feel this will not be significant. 
89 | 90 | -------------------------------------------------------------------------------- /L60-core-remove-custom-allocator.md: -------------------------------------------------------------------------------- 1 | Remove custom allocation function overrides from core surface API 2 | ---- 3 | * Author(s): veblush 4 | * Approver: mark 5 | * Status: Approved 6 | * Implemented in: https://github.com/grpc/grpc/pull/20462 7 | * Last updated: 2019-10-09 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/kDswzg-cEZU 9 | 10 | ## Abstract 11 | 12 | Remove `gpr_get_allocation_functions` and `gpr_set_allocation_functions` 13 | from the core surface API. 14 | 15 | ## Background 16 | 17 | Since gRPC Core was allowed to use the C++ standard library, it has become 18 | harder to keep using the gRPC allocator, which can be overridden by users. 19 | Previously, it was fairly straightforward to use the family of gpr memory 20 | allocation functions such as `gpr_malloc` and `gpr_free` because most of 21 | the code was written in C. 22 | Once C++ started being used heavily, however, honoring the custom allocator 23 | makes code harder to read and is sometimes impossible. 24 | Instead of maintaining these functions, this document proposes removing them. 25 | 26 | ### Related Proposals: 27 | 28 | N/A 29 | 30 | ## Proposal 31 | 32 | The following functions, which manage the custom memory allocator, will be removed. 33 | 34 | - `gpr_get_allocation_functions` 35 | - `gpr_set_allocation_functions` 36 | 37 | gRPC memory allocation functions such as `gpr_malloc` and `gpr_free` will 38 | remain with this change because they have been used when data allocated in 39 | user applications is passed into gRPC Core, such as `metadata`. 40 | 41 | ## Rationale 42 | 43 | C++ provides a way to use a custom memory allocator, but it usually requires 44 | writing more code and makes that code harder to read. 45 | 46 | This is an example of how code looks with a custom allocator. 
47 | 48 | ``` 49 | // New allocates memory with the custom allocator and calls the constructor of T. 50 | auto a = New<SomeClass>(obj); 51 | 52 | // Delete calls the destructor of T and frees the memory associated with it. 53 | Delete(a); 54 | 55 | // All container classes should be instantiated with a custom allocator. 56 | std::map<Key, Value, std::less<Key>, Allocator<std::pair<const Key, Value>>> m; 57 | 58 | // unique_ptr should carry a special Delete function to use the allocator. 59 | grpc_core::UniquePtr<SomeClass> a = grpc_core::MakeUnique<SomeClass>("a"); 60 | ``` 61 | 62 | This can be simplified by not supporting a custom allocator. Note that, unlike 63 | `grpc_core::Delete`, the standard `delete` does not require specifying the type 64 | of the instance. 65 | 66 | ``` 67 | // plain new 68 | auto a = new SomeClass(obj); 69 | 70 | // plain delete 71 | delete a; 72 | 73 | // plain map 74 | std::map<Key, Value> m; 75 | 76 | // plain unique_ptr 77 | std::unique_ptr<SomeClass> p = std::make_unique<SomeClass>("a"); 78 | ``` 79 | 80 | In addition, some C++ standard library facilities cannot use custom allocators. 81 | For example, `std::function` doesn't support an allocator. Moreover, 82 | none of the gRPC wrapped libraries, including gRPC C++, supports custom allocators. 83 | As a result, memory allocation could be done partly by the built-in allocator and 84 | partly by a custom allocator. This can be misleading to developers. 
85 | 86 | ## Implementation 87 | 88 | Core: https://github.com/grpc/grpc/pull/20462 89 | 90 | ## Open issues (if applicable) 91 | 92 | N/A 93 | -------------------------------------------------------------------------------- /L63_graphics/c_call_creds_hierarchy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L63_graphics/c_call_creds_hierarchy.png -------------------------------------------------------------------------------- /L63_graphics/call_creds_class_hierarchy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L63_graphics/call_creds_class_hierarchy.png -------------------------------------------------------------------------------- /L63_graphics/plugin_creds_codeflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L63_graphics/plugin_creds_codeflow.png -------------------------------------------------------------------------------- /L66-core-cancellation-status.md: -------------------------------------------------------------------------------- 1 | Core server RPCs will not report cancellation if completed with non-OK status 2 | ---- 3 | * Author(s): vjpai 4 | * Approver: markdroth 5 | * Status: Approved 6 | * Implemented in: https://github.com/grpc/grpc/pull/22991 7 | * Last updated: June 9, 2020 8 | * Discussion at https://groups.google.com/g/grpc-io/c/5o3EDPqz9is 9 | 10 | ## Abstract 11 | 12 | Clarify that gRPC core will not mark server RPCs cancelled if they explicitly close with `GRPC_OP_SEND_STATUS_FROM_SERVER` using a non-`OK` status. This is technically a core API change and thus needs a major revision increment. 
13 | 14 | ## Background 15 | 16 | gRPC core versions up to 10.X specify that the out argument of the `GRPC_OP_RECV_CLOSE_ON_SERVER` operation is a pointer to an int that specifies whether the RPC was cancelled. The definition specified in the comment is 17 | 18 | ``` 19 | /** out argument, set to 1 if the call failed in any way (seen as a 20 | cancellation on the server), or 0 if the call succeeded */ 21 | ``` 22 | 23 | The phrase "in any way" is not clear, and the broadest (and current) definition is that it accounts for failures caused by issues such as client-side or server-side cancellations, deadlines being exceeded, connections exceeding their maximum age, network resets, *and explicit sending of non-OK status*. 24 | 25 | 26 | ### Related Proposals 27 | 28 | * The [C++ callback API](https://github.com/grpc/proposal/pull/180) directly exposes an `OnCancel` method that makes the impact of this API decision very visible. 29 | 30 | ## Proposal 31 | 32 | Clarify that RPCs will only be marked cancelled if they failed for a reason other than completion with an explicit non-OK status provided using the `GRPC_OP_SEND_STATUS_FROM_SERVER` operation. In practice, this means that RPCs to be considered cancelled are those for which the server was not able to successfully send any kind of status to the client (because the RPC was explicitly cancelled, the deadline was exceeded, because some HTTP configuration parameter like maximum connection age took effect, etc.). 33 | 34 | ## Rationale 35 | 36 | Calling an RPC cancelled if it completes for entirely expected reasons is confusing. Generally speaking, this distinction has not been noticed at the API layers but will become more common once the C++ callback API becomes formalized (since that has an OnCancel reaction for server RPCs). As a result, this issue should be clarified and fixed before the C++ callback API becomes common. 
37 | 38 | ## Implementation 39 | 40 | The implementation is contained entirely within a single pull request at https://github.com/grpc/grpc/pull/22991. This pull request does the following: 41 | 42 | 1. Decide if a server call is canceled by whether or not it could successfully send status 43 | 1. Fix core tests (wrapped language tests already had the proper expectations) 44 | 1. Mark a call failed according to channelz if it was canceled or if the server sent a non-OK status 45 | 1. Add a C++ test to validate that OnCancel is not called on non-OK explicit status 46 | 1. Fix the core surface comment about this topic and bump the core major API version 47 | 1. Add a C++ comment clarifying the meaning of `IsCancelled` 48 | 49 | ## Open issues (if applicable) 50 | 51 | * Does this affect the core wrapped language APIs? 52 | - This distinction is not visible to Ruby or the C++ synchronous API at all since the RPC in both of those cases is complete when returning status. Thus cancellation checks just see the effect of other forms of cancellation. 53 | - For the C++ asynchronous API, `IsCancelled` currently doesn't define its result. This change also includes an explicit definition of cancellation for C++. 54 | - Python does not expose this functionality to the server application. 55 | - In C#, cancellation is expressed through the language-level [`CancellationToken`](https://docs.microsoft.com/en-us/dotnet/api/system.threading.cancellationtoken) and its `IsCancellationRequested` property (or the related `ThrowIfCancellationRequested`). The API for this object indicates that `IsCancellationRequested` should be true if the `CancellationToken` has its `Cancel` method called. Nowhere in the language-level or gRPC API is there any indication that a cancellation should be observable if the method handler throws an error status, so this change preserves API. 
(In practice, well-behaved code has no reason to access a copy of a method handler's `ServerCallContext` object after returning or throwing from the handler, and such behavior isn't tested.) 56 | - PHP and Objective-C wrap core but are not server languages, so this issue does not apply to them. 57 | 58 | * Should this only trigger on explicit cancellations? 59 | - In practice, it doesn't matter if the application requested cancellation or cancellation happened because of a deadline or network problem; in any case, the RPC failed for reasons outside the scope of the RPC itself. In practice, issues like deadline exceeded could be implemented using cancellation, so there is no strong distinction in any case. 60 | 61 | * Should server-side cancellations mark an RPC cancelled since they are an explicit part of the server operation? 62 | - In practice, server-side completion is preferred over cancellation even on failed RPCs. Cancellation should be used for unusual or unexpected cases (such as a need to promptly release resources), so that should still mark RPCs cancelled. Additionally, the name would make it non-intuitive to treat server-side cancellations as anything other than cancelled RPCs. 63 | 64 | -------------------------------------------------------------------------------- /L7-go-metadata-api.md: -------------------------------------------------------------------------------- 1 | Go Metadata API Change 2 | ---- 3 | * Author(s): Doug Fawley 4 | * Approver: a11r 5 | * Status: Ready for Implementation 6 | * Last updated: 2017/05/05 7 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/-qM_UnYFHJs 8 | 9 | ## Abstract 10 | 11 | Remove the `FromContext` and `NewContext` functions from the `grpc/metadata` 12 | package. 
13 | 14 | ## Background 15 | 16 | As documented in 17 | issue [grpc/grpc-go#1148](https://github.com/grpc/grpc-go/issues/1148), metadata 18 | was forwarded automatically by any Go server using the incoming context when 19 | performing outgoing gRPC calls, which is the standard practice for handling 20 | contexts. This behavior represents a security risk, as it exposes potentially 21 | sensitive information in the metadata, e.g. authentication certificates. 22 | 23 | In PR [grpc/grpc-go#1157](https://github.com/grpc/grpc-go/pull/1157), this 24 | security issue was fixed by separating the incoming and outgoing metadata in the 25 | context. A new API was introduced to set and retrieve these two sets of 26 | metadata. The old API was left in place to support backward compatibility, with 27 | the assumption that callers of `metadata.FromContext` were intending to retrieve 28 | the incoming metadata and callers of `metadata.NewContext` were intending to set 29 | the outgoing metadata. Unfortunately, this is not the case for interceptors -- 30 | client interceptors typically intend to retrieve the outgoing context (to verify 31 | or extend it), and server interceptors typically intend to add additional 32 | metadata to the incoming context -- and tests might reasonably intend anything. 33 | As a result, existing interceptors were broken (see 34 | issue [grpc/grpc-go#1219](https://github.com/grpc/grpc-go/issues/1219) for one 35 | such example). 36 | 37 | ## Proposal 38 | 39 | Because it is impossible to know the intentions of the caller, we propose to 40 | break backward compatibility with the old API by removing the `FromContext` and 41 | `NewContext` functions from the metadata package. This will force maintainers 42 | to determine which metadata was intended, and update their code accordingly. 
43 | The existing `FromIncomingContext`, `FromOutgoingContext` (rare), 44 | `NewIncomingContext` (rare), and `NewOutgoingContext` should be used instead to 45 | read and set metadata. 46 | 47 | ## Rationale 48 | 49 | Because the original API assumed only one copy of metadata was present in the 50 | context, no alternative was identified that could both maintain backward 51 | compatibility and not present a security risk. 52 | 53 | ## Implementation 54 | 55 | The implementation is straightforward: simply remove the functions. If any 56 | usages remain within grpc itself, they will be inspected and updated on a 57 | case-by-case basis. 58 | -------------------------------------------------------------------------------- /L72-core-google_default_credentials-extension.md: -------------------------------------------------------------------------------- 1 | Allow Call Credentials to be Specified in `grpc_google_default_credentials_create` 2 | ---- 3 | * Author(s): rbellevi 4 | * Approver: markdroth 5 | * Status: Draft 6 | * Implemented in: Core 7 | * Last updated: July 8th, 2020 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/fZNm4pU8e3s/m/2Be8u1n7BQAJ 9 | 10 | ## Abstract 11 | 12 | This document proposes that `grpc_google_default_credentials_create` be 13 | amended to allow the user to specify their desired call credentials. 14 | 15 | ## Background 16 | 17 | The Google default credentials created by the 18 | `grpc_google_default_credentials_create` function in Core enable connection to 19 | Google services via a combination of ALTS and SSL credentials, along with a special oauth2 20 | token that, by default, asserts the same identity as the channel-level ALTS credential. 21 | The ALTS credential will use the identity in a token gathered from a request to 22 | the 23 | `http://metadata.google.internal/computeMetadata/v1/project/service-accounts/default/token` 24 | endpoint. 25 | 26 | In C++, auth is handled by the gRPC library itself. 
In wrapped 27 | languages such as Python, however, auth is handled by external libraries which 28 | incur a dependency on gRPC, such as [`google-auth-library-python`](https://github.com/googleapis/google-auth-library-python). 29 | These libraries have their own implementation of the 30 | [Application Default Credentials](https://cloud.google.com/docs/authentication/production?_ga=2.68587985.1354052904.1594166352-2074181900.1593114348#finding_credentials_automatically) 31 | (ADC) mechanism, which uses the following strategy to create credentials: 32 | 33 | 1. First, ADC checks to see if the environment variable 34 | `GOOGLE_APPLICATION_CREDENTIALS` is set. If the variable is set, ADC uses 35 | the service account file that the variable points to. 36 | 37 | 2. If the environment variable isn't set, ADC uses the default service account 38 | that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and 39 | Cloud Functions provide, for applications that run on those services. 40 | 41 | 3. If ADC can't use either of the above credentials, an error occurs. 42 | 43 | Thus, if an auth library were to use the current version of 44 | `grpc_google_default_credentials_create`, this ADC 45 | logic would be duplicated between the auth library and gRPC Core. 46 | 47 | By default, the identity pulled from the `metadata.google.internal` endpoint and 48 | the identity from the ADC mechanism will align. 49 | 50 | ## Proposal 51 | 52 | I propose that the signature of `grpc_google_default_credentials_create` be 53 | amended to the following: 54 | 55 | ```C 56 | GRPCAPI grpc_channel_credentials* 57 | grpc_google_default_credentials_create(grpc_call_credentials* call_credentials); 58 | ``` 59 | 60 | Supplying `nullptr` for `call_credentials` will result in the current behavior 61 | of the function. That is, Core will attach a compute engine call credential 62 | based on the Application Default Credentials mechanism. 
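For reference, the ADC lookup order described in the background section can be sketched as follows. This is a minimal illustration of the strategy, not actual auth-library or Core code; `on_gce` is a hypothetical stand-in for detecting a Google metadata server:

```python
import os

def find_default_credentials(environ=os.environ, on_gce=False):
    """Sketch of the Application Default Credentials (ADC) lookup order.

    `on_gce` is a hypothetical stand-in for detecting a metadata server
    (Compute Engine, GKE, Cloud Run, App Engine, Cloud Functions).
    """
    # 1. Explicit service account file named by the environment variable.
    path = environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if path:
        return ("service_account_file", path)
    # 2. Otherwise, the default service account from the metadata server.
    if on_gce:
        return ("metadata_server_default_account", None)
    # 3. Otherwise, ADC fails with an error.
    raise RuntimeError("Could not automatically determine credentials")
```

It is this logic that would be duplicated between an auth library and gRPC Core if the auth library could not supply its own call credentials.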
63 | 64 | ## Rationale 65 | 66 | A first attempt at this problem was the addition of a new API very similar to 67 | `grpc_google_default_credentials_create`, but it was determined that too much 68 | was duplicated by this implementation. 69 | 70 | It is possible that the call credentials provided by the caller are not compute 71 | engine credentials or do not assert the identity of the default service account 72 | of the VM. Ideally, a programmatic check would verify that no such credentials 73 | are passed in. Unfortunately, the type of credentials passed in are opaque to 74 | both Core and the gRPC wrapped language library, making such a check impossible. 75 | A prominent warning will be added to the documentation for the function to warn 76 | users of such pitfalls. 77 | 78 | ## Implementation 79 | 80 | The implementation of this proposal will be carried out in [this PR.](https://github.com/grpc/grpc/pull/23203) 81 | 82 | ## Open issues (if applicable) 83 | 84 | N/A 85 | -------------------------------------------------------------------------------- /L75-core-remove-grpc-channel-ping.md: -------------------------------------------------------------------------------- 1 | Remove grpc_channel_ping from Core Surface API 2 | ---- 3 | * Author(s): yashykt 4 | * Approver: markdroth 5 | * Status: Final 6 | * Implemented in: Core https://github.com/grpc/grpc/pull/23894 7 | * Last updated: 2020-08-19 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/suC7gw_WOa4 9 | 10 | ## Abstract 11 | 12 | Remove `grpc_channel_ping` from the core surface API. 13 | 14 | ## Background 15 | 16 | `grpc_channel_ping` allows for the application to send a ping over the channel. 17 | 18 | ### Related Proposals: 19 | 20 | N/A 21 | 22 | ## Proposal 23 | 24 | Remove `grpc_channel_ping` from the core surface API. 25 | 26 | ## Rationale 27 | 28 | `grpc_channel_ping` is not used outside of tests, so there is no reason for it to be a part of the surface API. 
29 | 30 | ## Implementation 31 | 32 | Core: https://github.com/grpc/grpc/pull/23894 33 | 34 | ## Open issues (if applicable) 35 | 36 | N/A 37 | 38 | -------------------------------------------------------------------------------- /L78-python-rich-server-context.md: -------------------------------------------------------------------------------- 1 | Expose rich server context 2 | ---- 3 | * Author(s): stpierre 4 | * Approver: gnossen 5 | * Status: Draft 6 | * Implemented in: [python](https://github.com/grpc/grpc/pull/25457) 7 | * Last updated: 2021-02-17 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/ctZJkd4MfHg 9 | 10 | ## Abstract 11 | 12 | Expose context status code, details, and trailing metadata on the 13 | server side. 14 | 15 | ## Background 16 | 17 | Currently, on the server-side `ServicerContext` object, status code, 18 | details, and trailing metadata are private. As a result, an 19 | interceptor cannot take action based on any of those properties. 20 | 21 | ## Proposal 22 | 23 | We will add three methods to the servicer context interface 24 | (`grpc.ServicerContext`) and to the synchronous (`grpc._server._Context`) 25 | and AIO server (`grpc.aio.ServicerContext`) implementations: 26 | 27 | * `code()` returns the status code. 28 | * `details()` returns the status details. 29 | * `trailing_metadata()` returns the trailing metadata. 30 | 31 | All three are functions (not properties) for symmetry with the stub 32 | context class.
33 | 34 | ## Rationale 35 | 36 | One use case for that is a logging interceptor that logs the status of 37 | every response; ideally, with something like `grpc-interceptor`, you'd 38 | want to do something like: 39 | 40 | ```python 41 | def intercept( 42 | self, 43 | method: Callable, 44 | request: Any, 45 | context: grpc.ServicerContext, 46 | method_name: str, 47 | ) -> Any: 48 | LOG.info('[REQUEST] %s', method_name) 49 | 50 | try: 51 | retval = method(request, context) 52 | except Exception: 53 | LOG.info('[RESPONSE] %s (%s): %s', method_name, context.code(), context.details()) 54 | raise 55 | else: 56 | LOG.info('[RESPONSE] %s: OK', method_name) 57 | return retval 58 | ``` 59 | 60 | But, of course, `context.code()` and `context.details()` don't exist, and 61 | are instead the private attributes `context._state.code` and 62 | `context._state.details`. 63 | 64 | Why add getter methods to the ServicerContext when the author of the 65 | service handler is the one that calls the corresponding setter 66 | methods? They could keep track of what they set 67 | themselves. Interceptor authors, however, don't have this 68 | option. They're not necessarily in control of the service handler. So 69 | it makes sense to make these getters available to them. 70 | 71 | But once we add the getters to the interceptors, we want them added to 72 | the non-interceptor `ServicerContext` to maintain uniformity between 73 | the two interfaces. 74 | 75 | https://github.com/grpc/grpc/issues/24605 sketches out an earlier 76 | iteration of this. 77 | 78 | https://github.com/grpc/grpc/pull/25600 is an (unrelated but relevant) 79 | PR that adds most of the functionality for asyncio; that work will be 80 | used in the implementation. 81 | 82 | ## Implementation 83 | 84 | 1. Add the new API calls. https://github.com/grpc/grpc/pull/25457 holds an 85 | incomplete draft. 
86 | -------------------------------------------------------------------------------- /L79-cpp-byte-buffer-slice-methods.md: -------------------------------------------------------------------------------- 1 | C++ API changes on ByteBuffer and Slice 2 | ---- 3 | * Author(s): 4 | * Approver: vjpai 5 | * Status: Approved 6 | * Implemented in: C++ 7 | * Last updated: 2021-04-19 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/OsAYW1mDJ9w 9 | 10 | ## Abstract 11 | 12 | To better support custom serialization protocols besides protobuf, additional methods will be added to `grpc::ByteBuffer` and `grpc::Slice` to allow those serialization protocols to access data without copying it. 13 | This proposal purely adds new methods, so there shouldn't be any breaking changes. 14 | 15 | ## Background 16 | 17 | FlatBuffers has been using gRPC core methods to exchange data with gRPC, but it's not ideal because gRPC core doesn't promise API stability, which broke it from time to time. 18 | To address this problem, FlatBuffers needs to use public gRPC C++ APIs, which will be newly added to provide more control over memory management of `grpc::ByteBuffer` and `grpc::Slice`, so that it can gain more stability without a performance impact. 19 | We believe this approach will also be useful for other user-defined SerializationTraits. 20 | 21 | Related issues: 22 | - https://github.com/grpc/grpc/issues/20594 23 | - https://github.com/google/flatbuffers/issues/5836 24 | 25 | ### Related Proposals: 26 | 27 | N/A 28 | 29 | ## Proposal 30 | 31 | ### grpc::ByteBuffer 32 | 33 | `grpc::ByteBuffer` is going to have two additional methods. `TrySingleSlice` will return a single `grpc::Slice` if the buffer is made up of a single uncompressed slice, like `absl::Cord::TryFlat` ([code](https://github.com/abseil/abseil-cpp/blob/732c6540c19610d2653ce73c09eb6cb66da15f42/absl/strings/cord.h#L639)). This is useful for accessing data via the returned slice without copying. Otherwise, it fails.
34 | Another method, `DumpToSingleSlice`, returns a newly created slice containing the whole data, copying the underlying slices into a single one. This simplifies reading data from `grpc::ByteBuffer`. 35 | 36 | ``` 37 | Status TrySingleSlice(Slice* slice) const; 38 | Status DumpToSingleSlice(Slice* slice) const; 39 | ``` 40 | 41 | ### grpc::Slice 42 | 43 | `grpc::Slice` is going to have one simple method, `sub`, returning a sub-slice of the given slice over the specified span. 44 | 45 | ``` 46 | Slice sub(size_t begin, size_t end) const; 47 | ``` 48 | 49 | ## Rationale 50 | 51 | The list of APIs to be added here is based on the requirement of FlatBuffers to reimplement its plugin using gRPC C++ APIs only. These APIs already exist in `absl::Cord` and `absl::string_view`, so they are considered acceptable to have in gRPC as well. 52 | 53 | ## Implementation 54 | 55 | Implementation will be done in https://github.com/grpc/grpc/pull/26014 and the following are excerpts from the PR to give an idea of how those functions will be implemented.
56 | 57 | ### grpc::ByteBuffer 58 | 59 | ``` 60 | Status ByteBuffer::TrySingleSlice(Slice* slice) const { 61 | if (!buffer_) { 62 | return Status(StatusCode::FAILED_PRECONDITION, "Buffer not initialized"); 63 | } 64 | if ((buffer_->type == GRPC_BB_RAW) && 65 | (buffer_->data.raw.compression == GRPC_COMPRESS_NONE) && 66 | (buffer_->data.raw.slice_buffer.count == 1)) { 67 | grpc_slice internal_slice = buffer_->data.raw.slice_buffer.slices[0]; 68 | *slice = Slice(internal_slice, Slice::ADD_REF); 69 | return Status::OK; 70 | } else { 71 | return Status(StatusCode::FAILED_PRECONDITION, 72 | "Buffer isn't made up of a single uncompressed slice."); 73 | } 74 | } 75 | 76 | Status ByteBuffer::DumpToSingleSlice(Slice* slice) const { 77 | if (!buffer_) { 78 | return Status(StatusCode::FAILED_PRECONDITION, "Buffer not initialized"); 79 | } 80 | grpc_byte_buffer_reader reader; 81 | if (!grpc_byte_buffer_reader_init(&reader, buffer_)) { 82 | return Status(StatusCode::INTERNAL, 83 | "Couldn't initialize byte buffer reader"); 84 | } 85 | grpc_slice s = grpc_byte_buffer_reader_readall(&reader); 86 | *slice = Slice(s, Slice::STEAL_REF); 87 | grpc_byte_buffer_reader_destroy(&reader); 88 | return Status::OK; 89 | } 90 | ``` 91 | 92 | ### grpc::Slice 93 | 94 | ``` 95 | /// Returns a substring of the `slice` as another slice. 
96 | Slice Slice::sub(size_t begin, size_t end) const { 97 | return Slice(g_core_codegen_interface->grpc_slice_sub(slice_, begin, end), 98 | STEAL_REF); 99 | } 100 | ``` 101 | -------------------------------------------------------------------------------- /L84-cpp-call-failed-before-recv-message.md: -------------------------------------------------------------------------------- 1 | L84: Add grpc_call_failed_before_recv_message() to C-core API 2 | ---- 3 | * Author: alishananda 4 | * Approver: ctiller 5 | * Status: In Review 6 | * Implemented in: C++ 7 | * Last updated: 2021-08-27 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/UNBHGr6feeY 9 | 10 | ## Abstract 11 | 12 | There is a race between `ServerContext::IsCancelled` and `Read` in the sync API where a call is cancelled and `Read` returns false but `IsCancelled` also returns false. We need to ensure we propagate the cancellation to `IsCancelled` so it always returns true if `Read` returns false and the call was cancelled. 13 | 14 | As part of this fix, we have to move `grpc_call_failed_before_recv_message` into the C-core surface API so the implementation of `MaybeMarkCancelledOnRead` can be inlined and found by proto_library. 15 | 16 | ## Background 17 | 18 | A user reported that after cancelling a call, `IsCancelled` would sometimes return false when `Read` would also return false. 19 | 20 | We saw this same issue with the callback API, where the server couldn't tell whether `OnReadDone` returned false because the stream was cancelled or actually closed cleanly. [This PR](https://github.com/grpc/grpc/pull/26245) added a core subsurface `grpc_call_failed_before_recv_message` function in `src/core/lib/surface/call.h` and `MaybeMarkCancelledOnRead` function depending on this value in `src/cpp/server/server_context.cc`. 
Together, these distinguish a cancellation from a cleanly closed stream by marking places in the transport where `OnReadDone` will return an empty message because of a stream failure and using `grpc_call_failed_before_recv_message` to propagate this information out of the grpc_call struct. 21 | 22 | ### Related Proposals: 23 | N/A 24 | 25 | ## Proposal 26 | 27 | The proposal is to move `grpc_call_failed_before_recv_message` from subsurface to the C-core surface API. 28 | 29 | ## Rationale 30 | 31 | It seems that the build structure of the callback API is different from that of the sync API. When `MaybeMarkCancelledOnRead` is used with the sync API, linking fails with an error that the function cannot be found, because the proto library does not depend on the file implementing it. We thus have to inline `MaybeMarkCancelledOnRead` in `server_context.h`, which will mean adding the `grpc_call_failed_before_recv_message` function to our public API. 32 | 33 | 34 | ## Implementation 35 | 36 | We will move `grpc_call_failed_before_recv_message` out of `call.h` and into the C-core surface API. 37 | 38 | The PR with this implementation is https://github.com/grpc/grpc/pull/27056.
18 | * These rules (`py_proto_library` and `py_grpc_library`) each generate code from a single `proto_library`'s protos. 19 | * The produced Python libraries (or equivalently-functioning PyInfo providers) do not express dependencies on the generated Python code for the `proto_library`'s dependencies. 20 | * Users are surprised by this behavior. See [#23769](https://github.com/grpc/grpc/issues/23769) for an example bug. 21 | * This behavior is particularly surprising for Google-internal users. The internal versions of `py_proto_library` and `py_grpc_library` propagate Python dependencies. 22 | 23 | ### Related Proposals: 24 | `py_proto_library` and `py_grpc_library` are implemented in [python_rules.bzl](https://github.com/grpc/grpc/blob/master/bazel/python_rules.bzl). There doesn't seem to be a proposal for that, though. 25 | 26 | ## Proposal 27 | 28 | * We propose rebuilding the `py_proto_library` and `py_grpc_library` rules to use [aspects](https://docs.bazel.build/versions/main/skylark/aspects.html) to perform code generation steps. 29 | This approach corresponds to the Google-internal design for these rules. 30 | * The `_gen_py_proto_aspect` aspect visits `proto_library` rules to generate Python code for Protobuf. 31 | gRPC-owned code is still responsible for generating the Python code. 32 | * The aspect produces a custom [provider](https://docs.bazel.build/versions/main/skylark/rules.html#providers) `PyProtoInfo` that wraps a `PyInfo` provider to avoid creating spurious dependencies for Python users that interface with the `proto_library` rules through some other means. 33 | * The `py_proto_library` and `py_grpc_library` rules will only be responsible for collecting the `PyInfo` providers from their dependencies. 34 | * The `plugin` attribute must be removed from `py_proto_library`. Aspects require the declaration of all possible parameter values up front, so it would not be possible for the new aspects to continue supporting arbitrary plugins.
(Note that the plugin feature is not used in gRPC. It was introduced to support [GAPIC](https://github.com/googleapis/gapic-generator-python), which no longer uses the feature.) 35 | * In some use cases in gRPC [e.g. grpcio_channelz](https://github.com/grpc/grpc/blob/master/src/python/grpcio_channelz/grpc_channelz/v1/BUILD.bazel), the `py_proto_library` rule is located in a different package than the corresponding `proto_library` rule. 36 | This rule layout is needed to generate Python code with import paths that match the Python package layout, rather than the directory structure containing the `.proto` files. 37 | Since aspect-based code generation associates the generated code with the Bazel package (i.e. repository path) of the `proto_library` rule rather than the `py_proto_library` rule, we need special handling for this case. 38 | When the `py_proto_library` is in a different Bazel package than the `proto_library` rule, we generate an additional set of Python files that import the generated Python files under the old convention. 39 | Additionally, an `imports` attribute is added to allow the caller to add import paths (similar to the behavior of `py_library`). 40 | With these two changes, existing Python code can remain unmodified, with a minimal increase in BUILD file complexity. 41 | * No behavior change should be observed by the user of `py_proto_library` or `py_grpc_library` unless they rely on the (removed) `plugin` attribute, or if they use the new `imports` attribute. 42 | 43 | ## Rationale 44 | 45 | The proposed approach addresses the open bug and corrects the dependency issues. However, it requires the removal of a (likely unused) feature of the build rules. 46 | 47 | Alternatively, a set of repository rules could be introduced that allow users to inject `py_proto_library` and `py_grpc_library` implementations into the gRPC Bazel build system.
48 | This alternative approach would allow users to work around the dependency issues by taking on additional burden. 49 | The new build logic could be moved into a separate repository, or potentially upstreamed to a Bazel-owned repository. 50 | 51 | ## Implementation 52 | 53 | The implementation by beardsworth is mostly complete in [#27275](https://github.com/grpc/grpc/pull/27275). 54 | 55 | ### Testing 56 | 57 | Existing unit tests cover the current API surface. 58 | An additional test case will be added to the Bazel [Python test repo](https://github.com/grpc/grpc/tree/master/bazel/test/python_test_repo) that covers transitive dependency resolution. 59 | 60 | ## Open issues (if applicable) 61 | 62 | It is difficult to determine how many users rely on the `plugin` attribute. If many users rely on this behavior, then it may not be feasible to adopt this proposal. 63 | -------------------------------------------------------------------------------- /L88-cpp-absl-status-conversions.md: -------------------------------------------------------------------------------- 1 | Provide conversions from grpc::Status to absl::Status and back again 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: roth 5 | * Status: In Review 6 | * Implemented in: https://github.com/grpc/grpc/pull/27903 7 | * Last updated: November 1, 2021 8 | * Discussion at: 9 | 10 | ## Abstract 11 | 12 | Add conversion operations between absl::Status and grpc::Status to help boost interoperability with that library, and to aid in modernizing our codebase. 13 | 14 | ## Background 15 | 16 | grpc::Status was originally intended to be a polyfill for absl::Status. 17 | The released absl::Status, however, was not directly compatible with grpc::Status. 18 | Introduce canonical conversion operations between the two types.
19 | 20 | ### Related Proposals: 21 | 22 | N/A 23 | 24 | ## Proposal 25 | 26 | Add to grpc::Status: 27 | - an explicit conversion from absl::Status 28 | - implicit const& and && conversion operators to absl::Status 29 | 30 | i.e.: 31 | 32 | ``` 33 | namespace grpc { 34 | class Status { 35 | public: 36 | // ... 37 | explicit Status(absl::Status&&); 38 | operator const absl::Status&() const; 39 | }; 40 | } 41 | ``` 42 | 43 | ## Rationale 44 | 45 | We'd like to change to absl::Status everywhere, but we can't. 46 | Internal users are experiencing pain since these conversions are available there, but not externally. 47 | 48 | ## Implementation 49 | 50 | This proposal will be implemented as a single PR (and likely an associated cherry-pick internally). 51 | 52 | ## Open issues (if applicable) 53 | 54 | N/A 55 | 56 | -------------------------------------------------------------------------------- /L91-improved-directory-support-for-python-bazel-rules.md: -------------------------------------------------------------------------------- 1 | Improved handling of subdirectories and external repositories for py_proto_library 2 | ---- 3 | * Author(s): Thomas Köppe (tkoeppe) 4 | * Approver: gnossen 5 | * Status: Draft 6 | * Implemented in: Starlark 7 | * Last updated: 2021-11-29 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/paqz77BJ0Js 9 | 10 | ## Abstract 11 | 12 | The current implementation of `py_proto_library` fails if a dependency of a 13 | proto is located in a strict subdirectory of its package. This is because it 14 | uses `basename` to derive paths of generated files, which discards directory 15 | information. The proposal is to change the implementation to a more general 16 | prefix removal that retains the package-relative directory structure. 17 | 18 | Furthermore, the current implementation fails if a proto is taken from an 19 | external repository and has generated sources. 
This time this is due to a use of 20 | `rsplit` that discards directory information and results in the wrong import 21 | path being added to the list of PYTHONPATHs. We propose to use the same improved 22 | prefix removal to compute the correct import path. 23 | 24 | ## Background 25 | 26 | The current Starlark implementation of `py_proto_library` appears to make 27 | certain assumptions on the location of `.proto` files (namely, (1) that they are 28 | located in the root of their package, and (2) that they are not "virtual" 29 | (i.e. generated) if they come from an external repository); these assumptions 30 | are combined with knowledge of Bazel's output tree structure to derive the 31 | locations of various generated files. The current logic discards part of the 32 | proto's source information that is necessary to make both proto imports and 33 | Python module imports work. 34 | 35 | **An example.** 36 | 37 | ```python 38 | proto_library( 39 | name = "p2_proto", 40 | srcs = ["subdir/p2.proto"], 41 | ) 42 | 43 | proto_library( 44 | name = "p1_proto", 45 | srcs = ["p1.proto"], 46 | deps = [ 47 | ":p2_proto", # p1 contains: 'import "subdir/p2.proto";' 48 | "@elsewhere//some/place:q_proto", # p1 contains: 'import "some/place/q.proto";' 49 | ], 50 | ) 51 | 52 | # Currently broken. Should be expected to work. 53 | py_proto_library( 54 | name = "p1_proto_pb2", 55 | srcs = [":p1_proto"], 56 | ) 57 | ``` 58 | 59 | ### Related Proposals: 60 | 61 | (None.) 62 | 63 | ## Proposal 64 | 65 | See pull requests: 66 | 67 | * https://github.com/grpc/grpc/pull/28040 introduces the more general 68 | `_make_prefix` helper function to compute the prefix in the output tree of the 69 | proto file (regardless of whether the file is on disk or generated, or in the 70 | same repository or in an external one). Then, `source_file.basename` is 71 | replaced with `source_file.path[len(prefix):]`.
72 | * https://github.com/grpc/grpc/pull/28103 fixes the import path that is added to 73 | the PYTHONPATH list so as to add the correct path that contains the generated 74 | Python module for a proto library from an external repository whose sources 75 | are generated (a so-called "virtual import"). 76 | 77 | ## Rationale 78 | 79 | It appears to be an outright defect of the current implementation that certain 80 | kinds of `proto_library` dependencies are not supported. Nothing in the current 81 | API suggests constraints on `proto_library` dependencies; this proposal adjusts 82 | the implementation to conform to the API. 83 | 84 | ## Implementation 85 | 86 | The proposal has been implemented; see the list of pull requests above. 87 | -------------------------------------------------------------------------------- /L93-node-securecontext-creds.md: -------------------------------------------------------------------------------- 1 | L93: Add a new Node API to create credentials from a SecureContext 2 | ---- 3 | * Author(s): murgatroid99 4 | * Approver: wenbozhu 5 | * Implemented in: Node.js (grpc-js) 6 | * Last updated: 2021-12-10 7 | * Discussion at: (filled after thread exists) 8 | 9 | ## Abstract 10 | 11 | Add a new channel credentials creation API to grpc-js that uses the `SecureContext` type defined in Node's built-in TLS module. 12 | 13 | ## Background 14 | 15 | The existing `credentials.createSsl` API in the Node library handles a specific set of basic parameters based on the parameters handled by the corresponding API in the gRPC core library. Some Node users have requested the ability to use other parameters that Node's TLS APIs support, but gRPC does not. In particular: 16 | 17 | - [grpc/grpc-node#1712](https://github.com/grpc/grpc-node/issues/1712): The user wants to be able to pass in a passphrase to handle encrypted keys.
18 | - [grpc/grpc-node#1802](https://github.com/grpc/grpc-node/issues/1802): The user is apparently able to solve their problem by changing the TLS version and cipher list. 19 | - [grpc/grpc-node#1977](https://github.com/grpc/grpc-node/issues/1977): The user wants to pass the private key and certificate chain combined in the PFX format. 20 | 21 | ## Proposal 22 | 23 | The Node grpc-js library will add the function `credentials.createFromSecureContext`, which will take as arguments a [`SecureContext`](https://nodejs.org/api/tls.html#tlscreatesecurecontextoptions) and an optional `VerifyOptions` object (the final parameter of the existing `credentials.createSsl`). gRPC has a standard cipher list, but the cipher list is part of the secure context, so these credentials objects will use Node's default cipher list instead of gRPC's default cipher list. For the same reason, these credentials will use Node's default root certs list instead of gRPC's. 24 | 25 | The following two calls are almost equivalent, other than the cipher list, demonstrating the correspondence between the existing and new APIs: 26 | 27 | ```ts 28 | // Existing API 29 | credentials.createSsl(rootCerts, privateKey, certChain, verifyOptions); 30 | 31 | // New API 32 | credentials.createFromSecureContext(tls.createSecureContext({ 33 | ca: rootCerts, 34 | key: privateKey, 35 | cert: certChain 36 | }), verifyOptions); 37 | ``` 38 | 39 | ## Rationale 40 | 41 | Internally, grpc-js's secure channel credentials implementation uses the built-in Node TLS APIs, but access to those features is restricted by the API gRPC provides. The referenced feature requests want to use specific features in Node's TLS API that gRPC does not expose. The simplest way to address both those requests and any similar future requests is to directly accept any `SecureContext` the user can create.
42 | 43 | 44 | ## Implementation 45 | 46 | This is implemented in grpc-js in [PR #1988](https://github.com/grpc/grpc-node/pull/1988). -------------------------------------------------------------------------------- /L94-core-eliminate-slice-interning.md: -------------------------------------------------------------------------------- 1 | Eliminate Slice Interning 2 | ---- 3 | * Author(s): ctiller@google.com 4 | * Approver: roth@google.com 5 | * Status: Approved 6 | * Implemented in: Core 7 | * Last updated: 1/11/2022 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/nJYz60XHLuo 9 | 10 | ## Abstract 11 | 12 | Remove slice interning related APIs. 13 | 14 | ## Background 15 | 16 | gRPC Core used to rely on slice interning to improve metadata handling performance. 17 | Recent advancements in the metadata system design have obviated the need for this, and so it's time to remove the API complexity incurred by this feature. 18 | 19 | ### Related Proposals: 20 | None. 21 | 22 | ## Proposal 23 | 24 | Remove the following APIs: 25 | * grpc_slice_intern 26 | * grpc_slice_default_hash_impl 27 | * grpc_slice_default_eq_impl 28 | * grpc_slice_hash 29 | 30 | The first removes the ability to create an interned slice. 31 | Because interned slices had opinions about hashing and equality (since they could optimize those operations), hashing and equality hooks were introduced along with interning. 32 | These hooks are no longer required, so we'll remove the gRPC opinion on how slices should be hashed; wrapped languages are of course free to choose their own hash function for slice data. 33 | 34 | ## Rationale 35 | 36 | We don't need this anymore. 37 | 38 | ## Implementation 39 | 40 | This is implemented in https://github.com/grpc/grpc/pull/28363.
41 | -------------------------------------------------------------------------------- /L95-python-reflection-client.md: -------------------------------------------------------------------------------- 1 | Python Reflection Client 2 | ---- 3 | * Author: Tomer Vromen 4 | * Approver: lidizheng, gnossen 5 | * Status: In Review 6 | * Implemented in: Python 7 | * Last updated: 2022-01-20 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/1fAWSxCy2mM 9 | 10 | ## Abstract 11 | 12 | Add a programmatic way to access reflection descriptors from client-side Python. 13 | 14 | ## Background 15 | 16 | Reflection is a means by which a server can describe which services and messages it supports. 17 | The Reflection protocol is already part of the [gRPC repository](https://github.com/grpc/grpc/blob/master/doc/server-reflection.md), 18 | and there are server-side implementations of it in several languages, including C++ and Python - see [gRPC Python Server Reflection](https://github.com/grpc/grpc/blob/master/doc/python/server_reflection.md). 19 | 20 | However, the only client-side implementation is in [C++](https://github.com/grpc/grpc/blob/master/doc/server_reflection_tutorial.md#use-server-reflection-in-a-c-client). 21 | A client-side Python implementation is missing to complete the picture. 22 | 23 | This proposal closes that gap. 24 | 25 | ### Related Proposals: 26 | 27 | * Loosely related: [Promote the Reflection Service from v1alpha to v1](https://github.com/grpc/proposal/blob/master/A15-promote-reflection.md) 28 | - The proposal discusses the stability of the reflection service. 29 | - Accepted but not yet implemented - see [#27957](https://github.com/grpc/grpc/pull/27957). 30 | 31 | ## Proposal 32 | 33 | Provide a Python implementation for client-side reflection, modeled after the existing C++ implementation.
34 | 35 | * Implement `ProtoReflectionDescriptorDatabase`, which implements the 36 | [`DescriptorDatabase`](https://googleapis.dev/python/protobuf/latest/google/protobuf/descriptor_database.html#google.protobuf.descriptor_database.DescriptorDatabase) 37 | interface. 38 | * Write tests. 39 | * Write documentation. 40 | 41 | ## Rationale 42 | 43 | Python provides an easy interface for reflection, due to its dynamic nature. 44 | For example, retrieving the descriptor for a service can be as simple as 45 | ```Python 46 | service_desc = desc_pool.FindServiceByName("helloworld.Greeter") 47 | ``` 48 | 49 | The alternative is that anyone who wishes to have a Python reflection client would have to implement it themselves. 50 | 51 | Adding proper tests to the codebase ensures correctness even if the reflection protocol changes. 52 | 53 | The main downside is having a bigger API surface to support. 54 | 55 | Related discussion: [how to get reflected services and rpc interface in grpc-python?](https://groups.google.com/g/grpc-io/c/SS9pkHMiLK4/m/OcwtakfqBQAJ) 56 | 57 | ## Implementation 58 | 59 | Implementation is already available as a PR [#28443](https://github.com/grpc/grpc/pull/28443).
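To make the `DescriptorDatabase` relationship concrete, here is a sketch using protobuf's in-memory `DescriptorDatabase`. The proposed `ProtoReflectionDescriptorDatabase` would plug into a `DescriptorPool` in the same way, except that it would fetch `FileDescriptorProto`s over the reflection stream instead of from local registrations; the `helloworld` names are just the example from above:

```python
from google.protobuf import descriptor_pb2
from google.protobuf.descriptor_database import DescriptorDatabase
from google.protobuf.descriptor_pool import DescriptorPool

# Register a file descriptor locally; a reflection-backed database would
# obtain the same FileDescriptorProto from the server instead.
file_proto = descriptor_pb2.FileDescriptorProto()
file_proto.name = "helloworld.proto"
file_proto.package = "helloworld"
file_proto.message_type.add().name = "HelloRequest"

db = DescriptorDatabase()
db.Add(file_proto)

# A pool backed by a database resolves names lazily through it; the
# reflection client enables the same kind of lookup (including
# FindServiceByName) against a remote server.
desc_pool = DescriptorPool(db)
msg_desc = desc_pool.FindMessageTypeByName("helloworld.HelloRequest")
print(msg_desc.full_name)  # helloworld.HelloRequest
```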
60 | 61 | -------------------------------------------------------------------------------- /L96_graphics/diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L96_graphics/diagram.png -------------------------------------------------------------------------------- /L98-requiring-cpp14.md: -------------------------------------------------------------------------------- 1 | L98: Requiring C++14 in gRPC Core/C++ Library 2 | ---- 3 | * Author(s): veblush 4 | * Approver: markdroth 5 | * Status: Approved 6 | * Implemented in: n/a 7 | * Last updated: May 19, 2022 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/cpSVzf3rZYY 9 | 10 | ## Abstract 11 | 12 | gRPC will start requiring C++14. 13 | 14 | ## Background 15 | 16 | gRPC has required C++11 since 2017, per 17 | [Allow C++ in gRPC Core Library](L6-core-allow-cpp.md). As all compilers 18 | that gRPC supports are now capable of C++14, gRPC is going to require 19 | C++14 to benefit from new C++14 features. 20 | 21 | This is aligned with 22 | [the OSS Foundational C++ support policy](https://opensource.google/documentation/policies/cplusplus-support), 23 | which says: 24 | 25 | * We will support all modern C++ standards, from some oldest standard to the 26 | newest. 27 | 28 | * We will drop support for our oldest supported C++ standard when one of the 29 | following happens: 30 | * All supported compilers default to a newer version 31 | * 10 years pass since the standard's release date 32 | 33 | ## Proposal 34 | 35 | gRPC 1.46 will be the last release supporting C++11; future releases will 36 | require C++ >= 14. We plan to backport critical (P0) bug fixes and security 37 | fixes to this release for a year, that is, until 2023-06-01. 38 | 39 | This change won't bump the major version of gRPC since it doesn't introduce 40 | API changes. Hence, the first version requiring C++14 will be 1.47.
41 | -------------------------------------------------------------------------------- /L99-core-eliminate-corking.md: -------------------------------------------------------------------------------- 1 | L99: C-Core Eliminate Corking 2 | ---- 3 | * Author(s): ctiller 4 | * Approver: markdroth 5 | * Status: In Review 6 | * Implemented in: C Core 7 | * Last updated: 8/2/2022 8 | * Discussion at: https://groups.google.com/g/grpc-io/c/6GzDzAoiySk 9 | 10 | ## Abstract 11 | 12 | Remove the GRPC_INITIAL_METADATA_CORKED flag. 13 | 14 | ## Background 15 | 16 | This flag does nothing inside core. 17 | 18 | ## Proposal 19 | 20 | Remove the flag and all references to it in the public API. 21 | 22 | ## Rationale 23 | 24 | This flag was originally introduced to support corking initial metadata, but that's now entirely handled in binding layers, so the flag is no longer useful. 25 | 26 | ## Implementation 27 | 28 | Remove the flag and references to it: https://github.com/grpc/grpc/pull/30443 29 | -------------------------------------------------------------------------------- /L9_graphics/bar_after.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L9_graphics/bar_after.png -------------------------------------------------------------------------------- /L9_graphics/bar_before.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grpc/proposal/18c49bdd0522ab7a0e8b3fb4fdd66f783623c92c/L9_graphics/bar_before.png -------------------------------------------------------------------------------- /P1-cloud-native.md: -------------------------------------------------------------------------------- 1 | Moving gRPC to the Cloud Native Computing Foundation (CNCF) and ASLv2 2 | ------------------------------------------------------- 3 | * Author(s): Varun Talwar, Chris Aniszczyk 4 | * Approver: a11r 5 | * Status:
Draft 6 | * Implemented in: N/A 7 | * Last updated: 2017-02-01 8 | * Discussion at: https://groups.google.com/forum/#!msg/grpc-io/AWCJlR-MA9k/N-EKJtQPAwAJ 9 | 10 | ## Abstract 11 | 12 | * Move the gRPC project/community under the auspices of the Cloud Native Computing Foundation (CNCF) 13 | * Move the gRPC license to Apache License v2.0 14 | * Host gRPC tracks/events at CloudNativeCon/KubeCon events in late March and early December 15 | 16 | ## Rationale 17 | 18 | The Cloud Native Computing Foundation (https://cncf.io) is the current neutral home of the Kubernetes project and four additional projects in the cloud native technology space ([fluentd](http://www.fluentd.org/), [linkerd](https://linkerd.io/), [prometheus](https://prometheus.io/), [opentracing](http://opentracing.io/)). The CNCF acts as a neutral home for gRPC and increases the willingness of developers from other companies and independent developers to collaborate, contribute, and become committers. There is a plethora of benefits available for projects under the CNCF, which are discussed here (https://www.cncf.io/projects); some that may be relevant to the gRPC community are: 19 | 20 | * Our world-class events team will create a track or custom conference for your project at our CloudNativeCon/KubeCon events around the world, bringing together developers and users. 2017 events will take place in at least North America, Europe, China, and Japan. 21 | * You get priority access to the $20 million CNCF Community Cluster (https://github.com/cncf/cluster), a 1,000-server deployment of state-of-the-art Intel servers housed at the Supernap Switch facility in Las Vegas. We encourage you to do large-scale integration testing before releases, as well as testing experimental approaches and the scalability impact of pull requests. 22 | * You will have control over a substantial annual budget (currently ~$20K) to improve your project documentation.
23 | * We have travel funding available for your non-corporate-backed developers and to increase attendance of women and other underrepresented minorities. 24 | * We will connect you to our worldwide network of Cloud Native meetup groups (http://meetups.cncf.io) and ambassadors to raise awareness of your project. We will also help sponsor meetup groups dedicated to your project so food and beverages can be provided. 25 | * You will have access to full-time CNCF staff who are eager to assist your project in myriad ways and help make it successful. 26 | 27 | The important aspect is that your existing committers still control your project; we just ask that you have a neutral, unbiased process for resolving conflicts, deciding on new committers, and moving existing ones to emeritus status. 28 | 29 | For an explanation of why the CNCF prefers the Apache License v2.0, see this blog post: 30 | https://www.cncf.io/blog/2017/02/01/cncf-recommends-aslv2 31 | 32 | ## Proposal 33 | 34 | We propose to give gRPC and its existing projects (https://github.com/grpc and https://github.com/grpc-ecosystem) a new home at the CNCF. The suggested process can be completed in four steps: 35 | 36 | * Have this gRFC proposal open for 10 days and serve as a notice and hub for any community questions around the move. 37 | * Move gRPC to the Apache License v2.0 on Feb 15th. 38 | * File the official gRPC proposal as a CNCF project and have the TOC call for a vote. 39 | * Assuming the vote passes, announce the move to the public and throw a party! 40 | 41 | This process should take 2-3 weeks to complete, depending on community feedback and voting.
42 | 43 | ### Related Items: 44 | 45 | * https://github.com/cncf/toc/pull/23 46 | * https://www.cncf.io/blog/2017/02/01/cncf-recommends-aslv2 47 | * https://www.cncf.io/announcement/2016/03/10/cloud-native-computing-foundation-accepts-kubernetes-as-first-hosted-project-technical-oversight-committee-elected 48 | -------------------------------------------------------------------------------- /P3-grfcs-for-core-api-changes.md: -------------------------------------------------------------------------------- 1 | Require gRFCs for core API changes 2 | ---- 3 | * Author(s): vjpai 4 | * Approver: a11r 5 | * Status: Approved 6 | * Implemented in: n/a 7 | * Last updated: January 12, 2018 8 | * Discussion at: https://groups.google.com/forum/#!topic/grpc-io/gHSoRShRl9w 9 | 10 | ## Abstract 11 | 12 | To facilitate the stabilization of gRPC core and its use in 13 | wrapped-language APIs, any change, addition, or deletion of a core 14 | surface API (those declared in 15 | https://github.com/grpc/grpc/tree/master/include/grpc or its 16 | subdirectories) should be accompanied by a gRFC in the 17 | https://github.com/grpc/proposal repository. 18 | 19 | ## Background 20 | 21 | The gRPC Core library is not considered public API. However, it is 22 | directly used by numerous language bindings. Although we have allowed 23 | changes to the core surface API in the past without the gRFC process, 24 | this practice should not continue, as such changes destabilize the use 25 | of core. 26 | 27 | As with all gRPC components, gRPC Core follows the principles of 28 | [semantic versioning](https://semver.org/). Since breaking changes are 29 | allowed, gRPC Core is currently at version 5.0.0 while gRPC C++ and 30 | other wrappings are at version 1.9.0. 31 | 32 | ### Related Proposals: 33 | 34 | N/A 35 | 36 | ## Proposal 37 | 38 | Any addition, change, or deletion of a core surface API data type or 39 | function should not be implemented without following the gRFC 40 | process.
There should be an exception for adding new APIs that are 41 | marked as experimental, but these must go through a gRFC if they are 42 | ever changed from experimental to non-experimental. 43 | 44 | ## Rationale 45 | 46 | One of the objectives of this work, and of recent gRPC core efforts in 47 | general, is to allow wrapped languages to move to their own 48 | repositories, bringing in core as a submodule. As a result, these 49 | bindings will update their core linkage point less frequently and are 50 | susceptible to breakage at upgrade time as core APIs change. 51 | 52 | Changes and deletions are considered API breakage that requires a bump 53 | in the major version number of core, and these will immediately break 54 | other language bindings. The implementors of those 55 | bindings should have a chance to comment before their code is 56 | broken. 57 | 58 | The rationale in the case of additions is more subtle. Just as in the 59 | actual language binding APIs, an API addition should be considered a 60 | promise of stability. The library should not make promises of 61 | stability without careful consideration. There can be an exception for 62 | adding new APIs that are marked experimental, as these do not offer any 63 | implied promise of stability and the gRFC process may slow down 64 | experimentation with a new and useful feature. 65 | 66 | ## Implementation 67 | 68 | N/A 69 | 70 | ## Open issues (if applicable) 71 | 72 | N/A 73 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # gRPC RFCs 2 | ## Introduction 3 | Please read the gRPC organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md) 4 | and [contribution guidelines](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md) before proceeding.
5 | 6 | This repo contains the design proposals for substantial feature changes for 7 | gRPC that need to be designed upfront. The goal of the upfront design process 8 | is to: 9 | - Provide increased visibility to the community on upcoming changes and the design considerations around them. 10 | - Provide the ability to reason about larger “sets” of changes that are too big to be covered either in an Issue or in a PR. 11 | - Establish a consistent process for structured participation by the community on large changes, especially those that impact multiple runtimes and implementations. 12 | 13 | ## Prerequisites 14 | This process needs to be followed for any significant change to gRPC that 15 | needs design. 16 | Changes that are considered significant can be: 17 | - Features that need implementation across runtimes and languages. 18 | - Process changes that affect how the gRPC product is implemented. 19 | - Breaking changes to the public API (i.e. semver major changes). 20 | 21 | ## Process 22 | 23 | 1. Fork the repo and copy the template [GRFC-TEMPLATE.md](GRFC-TEMPLATE.md). 24 | 1. Rename it to ``$CategoryName-$Summary``, e.g. ``A6-client-retries.md`` (see 25 | category definitions below). 26 | - For language-specific proposals, include the name of the language: 27 | ``L##-$Language-$Summary``. Canonical names: `core`, `cpp`, `csharp`, `go`, 28 | `java`, `node`, `objc`, `php`, `python`, `ruby`. 29 | 1. Write up the RFC. 30 | 1. Submit a Pull Request. 31 | 1. Someone from the gRPC team will be assigned as an APPROVER as part of this 32 | review. Once the APPROVER is assigned, the OWNER needs to start a discussion on 33 | [grpc-io](https://groups.google.com/forum/#!forum/grpc-io) and update the PR 34 | with the discussion link. After this is done, the OWNER should update the gRFC 35 | to the state of ``In Review``. It is expected that the APPROVER will help the 36 | OWNER along this process as needed. 37 | 1.
For at least a period of 10 business days (the minimum comment period), 38 | it is expected that the OWNER will respond to the comments and make updates 39 | to the RFC as new commits to the PR. Throughout the process, the discussion 40 | needs to be kept to the designated thread in the mailing list in order to 41 | avoid splintering conversations. The OWNER is encouraged to solicit as much 42 | feedback on the proposal as possible during this period. 43 | PR comments should be limited to formatting and vocabulary. 44 | 1. If there is consensus as deemed by the APPROVER during the comment period, 45 | the APPROVER will mark the proposal as final and assign it a gRFC number. 46 | Once this is assigned (as part of the closure of discussion), the OWNER will 47 | update the state of the PR to final and submit the PR. 48 | Commits must not be squashed; the commit history serves as a log of changes 49 | made to the proposal. 50 | 51 | ## APPROVER 52 | - By default ``a11r`` is the approver unless another approver is assigned 53 | on a per-proposal basis. 54 | - If the assigned APPROVER and the OWNER cannot satisfactorily settle an issue, 55 | the final APPROVER is still ``a11r``. 56 | 57 | ## Proposal Categories 58 | The proposals shall be numbered in increasing order. 59 | 60 | - ``#An`` - Affects all languages. 61 | - ``#Pnn`` - Affects processes, such as the proposal process itself. 62 | - ``#Lnnn`` - Language-specific changes to external APIs or platform support. 63 | - ``#Gnnnn`` - Protocol-level changes. 64 | 65 | ## Proposal Status 66 | 1. Every uncommitted proposal candidate starts off in the ``Draft`` state. 67 | 1. After it is accepted for review and posted to the group, it enters the 68 | ``In Review`` state. 69 | 1. Once it is approved for submission by the arbiter, it goes into the 70 | ``Final`` state. Only minor changes are allowed (what qualifies as minor is 71 | left to the APPROVER). 72 | 1.
If a proposal needs to be revisited, it can be moved back to the ``Draft`` 73 | or ``In Review`` state. This can happen if issues are discovered during 74 | implementation; at that point, the review process described above must be 75 | followed again. 76 | 1. Once a proposal is ``Final`` and if it has been implemented by a language, 77 | it can be updated to a status of ``Implemented`` with the implementing 78 | languages listed. (Listing versions is not required.) 79 | --------------------------------------------------------------------------------