├── .gitignore
├── LICENSE
├── README.md
├── articles
│   ├── 2013-12-12-python-meta-programming.markdown
│   ├── 2013-12-13-dive-into-protocol-buffers-python-api.markdown
│   └── 2013-12-15-implement-an-asynchronous-rpc-basing-on-protocol-buffers.markdown
├── example
│   ├── __init__.py
│   ├── echo_client.py
│   ├── echo_service.proto
│   ├── echo_service.py
│   └── echo_service_pb2.py
├── google
│   ├── __init__.py
│   ├── __init__.pyc
│   └── protobuf
│       ├── __init__.py
│       ├── compiler
│       │   ├── __init__.py
│       │   └── plugin_pb2.py
│       ├── descriptor.py
│       ├── descriptor_database.py
│       ├── descriptor_pb2.py
│       ├── descriptor_pool.py
│       ├── internal
│       │   ├── __init__.py
│       │   ├── api_implementation.py
│       │   ├── containers.py
│       │   ├── cpp_message.py
│       │   ├── decoder.py
│       │   ├── encoder.py
│       │   ├── enum_type_wrapper.py
│       │   ├── message_listener.py
│       │   ├── python_message.py
│       │   ├── type_checkers.py
│       │   └── wire_format.py
│       ├── message.py
│       ├── message_factory.py
│       ├── reflection.py
│       ├── service.py
│       ├── service_reflection.py
│       └── text_format.py
├── logger.py
├── rpc
│   ├── __init__.py
│   ├── rpc_channel.py
│   ├── rpc_controller.py
│   ├── tcp_client.py
│   ├── tcp_connection.py
│   └── tcp_server.py
└── tests
    └── tcp_server_client_test.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
*.pyc
.idea/

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
The MIT License (MIT)

Copyright (c) 2013 Meng Zhang

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
protobuf-RPC
============

A protobuf-based RPC implementation with an echo service example

----

My three articles about the protobuf Python API and this example repo are listed below:

1. [Python Meta Programming Used in Protobuf](./articles/2013-12-12-python-meta-programming.markdown)

2. [Dive Into Protocol Buffers Python API](./articles/2013-12-13-dive-into-protocol-buffers-python-api.markdown)

3. [Implement An Asynchronous RPC Based on Protocol Buffers](./articles/2013-12-15-implement-an-asynchronous-rpc-basing-on-protocol-buffers.markdown)
--------------------------------------------------------------------------------
/articles/2013-12-12-python-meta-programming.markdown:
--------------------------------------------------------------------------------
tags:
- Python
comments: true
date: 2013-12-12 23:47:23 +0800
layout: post
status: public
title: 'Python Meta-programming'
---

In day-to-day work, Python meta-programming is actually used rather rarely. This language feature also tends to hurt code maintainability, so it should be avoided whenever possible.

For certain special code, however, such as the Python API of *Google Protocol Buffers* that I am currently studying, classes have to be generated from user-defined proto files, which demands very strong class-customization capabilities. That is exactly what meta-programming is good at.

Python's new-style classes can modify a class definition at runtime in two ways: through the `__new__` method, and by specifying `__metaclass__`. protobuf mainly uses the latter. Following the "Meta-programming" chapter of *Expert Python Programming*, this post explains both approaches.

## Modifying the class definition during instantiation with `__new__` ##

The `__new__` method is known as the `meta-constructor`. During instantiation, every class first calls `__new__` to obtain the object instance, and only afterwards calls `__init__` if one is defined.

A simple experiment:
```python
class MyClass(object):

    def __new__(cls):
        print '__new__ called'
        return object.__new__(cls)

    def __init__(self):
        print '__init__ called'

instance = MyClass()
```

Running this script prints:

```bash
__new__ called
__init__ called
```

So by overriding `__new__` we can change how a class is instantiated. A typical application is implementing the singleton pattern:

```python
class Singleton(object):

    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(Singleton, cls).__new__(cls, *args, **kwargs)
        return cls._instance
```

Critical initialization can also be performed inside `__new__`. Even if the class is subclassed and the subclass forgets to call the base class's `__init__`, the code can still run correctly, or at least fail with a clear message. The `Thread` class in the `threading` module uses this kind of mechanism to guard against uninitialized subclasses.
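To make that guard concrete, here is a minimal sketch of the idea (my own illustration, not the actual code from `threading`):

```python
class Guarded(object):

    def __new__(cls, *args, **kwargs):
        instance = object.__new__(cls)
        # runs even if a subclass overrides __init__ and forgets
        # to call the base class's initializer
        instance._initialized = False
        return instance

    def __init__(self):
        self._initialized = True

    def run(self):
        if not self._initialized:
            raise RuntimeError('__init__ was never called')


class Careless(Guarded):
    def __init__(self):
        pass  # forgot to call Guarded.__init__


Careless().run()  # raises RuntimeError instead of failing mysteriously
```

Because `__new__` always runs, the flag is guaranteed to exist, and the error message points directly at the real mistake.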

## The more flexible `__metaclass__`

A metaclass is, simply put, a class that produces classes. Here we call the `class` itself a "class object", and an object instantiated from a class object an "instance object". In Python, by default every class object is an instance object of the `type` class; that is, `type` is the metaclass of every `class`, and even the metaclass of `type` itself is `type`.

### Python's `type` built-in

The `type` built-in is one of the few special cases in Python that violate the "do one thing" principle:

1. Called with a single argument, `type(instance)` acts as a function that returns the `class` of `instance` (an instance object).
2. Called with three arguments, `type(classname, base_types, dict)` acts as a class that instantiates a `class` (a class object) from the given arguments.

We said earlier that every `class` is an instance of `type`; with these two usages this is easy to verify in code:

```pycon
>>> class A(object):
...     pass
... 
>>> instance = A()
>>> type(instance)
<class '__main__.A'>
>>> type(A)
<type 'type'>
>>> type(type)
<type 'type'>
```

### Generating class objects with a custom metaclass

To generate class objects with a custom metaclass, first work out how Python's default `type` does it when acting as a metaclass. From the second usage of `type` above you can probably already guess: given the name of the desired `class`, its base classes, and the corresponding attributes and methods, `type` instantiates the class object we want. In other words, the following two snippets can be considered equivalent:

```python
def method(self):
    return 0

MyClass = type('MyClass', (object,), {'method': method})

instance = MyClass()
```

```python
class MyClass(object):
    def method(self):
        return 0

instance = MyClass()
```

Python provides the more convenient `__metaclass__` syntax for assigning a custom metaclass to a class:

```python
class MyClass(object):
    __metaclass__ = MyMetaClass

    def some_method(self):
        pass
```

The custom metaclass (`MyMetaClass`) only needs to satisfy the following two conditions (as long as it does, it need not even be a class):

1. It accepts the same argument list as `type`: the class name, a tuple of base classes, and a dict of attributes.
2. It returns a class object.

Borrowing an example from *Expert Python Programming* to illustrate `__metaclass__`: here, classes that do not specify a `docstring` are given a default one at creation time:

```python
def type_with_default_doc(classname, base_types, dict):
    if '__doc__' not in dict:
        dict['__doc__'] = 'default doc'
    return type(classname, base_types, dict)

class MyClassWithDoc(object):
    __metaclass__ = type_with_default_doc
```

That is really all there is to using `__metaclass__`; it is relatively simple. But as mentioned at the start, unless it is truly necessary you should avoid this language feature, because a project that uses too many metaclasses is bound to be hard to maintain.

The main purpose of this post is to lay the groundwork for dissecting the implementation of the *Google Protocol Buffers* Python API, which internally relies on the `__metaclass__` mechanism to generate the message structures and services defined in proto files. That will be covered in more detail in follow-up posts :)

--------------------------------------------------------------------------------
/articles/2013-12-13-dive-into-protocol-buffers-python-api.markdown:
--------------------------------------------------------------------------------
tags:
- Python
- protobuf
comments: true
date: 2013-12-13 23:01:13 +0800
layout: post
status: public
title: 'Dive into Protocol Buffers Python API'
---

*Google Protocol Buffers* is the data interchange format used at Google, with wide application in RPC protocols, file storage, and more. Its basic usage will not be repeated here; see the *protobuf* project homepage. The main content of this post is a dissection of the concrete implementation of the *protobuf* Python API.

Since we need more than plain `message` structures (later we also want to implement an RPC mechanism with *protobuf*'s `service`), this post analyzes both parts. To keep the walkthrough as clear as possible, the simplest possible `message` and `service` structures serve as the study subject; once the ideas are clear, more complex structures can be analyzed in much the same way. The starting point is the following proto file and the code compiled from it:

```protobuf
package sample;

option py_generic_services = true;

message Void {}

message SampleMessage {
    required string message = 1;
}

service SampleService {
    rpc echo(SampleMessage) returns(Void);
}
```

Compile it with `protoc` to obtain the corresponding Python module sample_pb2.py:

```bash
$ protoc --python_out=. sample.proto
```
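Before dissecting the generated code, a quick sanity check shows that the compiled module already behaves like an ordinary Python class (a sketch that assumes the sample_pb2.py generated above is importable):

```python
from sample_pb2 import SampleMessage

msg = SampleMessage(message=u'hello')
data = msg.SerializeToString()   # compact wire-format bytes

parsed = SampleMessage()
parsed.ParseFromString(data)
assert parsed.message == u'hello'
```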
The generated .py file runs to over 100 lines, so for convenience the walkthrough below proceeds structure by structure.

## message

For `message`, we dissect the code generated for `message SampleMessage`.

The Python class definition corresponding to `message SampleMessage` is extremely simple:

```python
class SampleMessage(_message.Message):
  __metaclass__ = _reflection.GeneratedProtocolMessageType
  DESCRIPTOR = _SAMPLEMESSAGE
```

For the `__metaclass__` involved here, see the previous post [《Python Meta-programming》](/blog/2013/12/12/python-meta-programming/).

`_SAMPLEMESSAGE` is defined as:

```python
_SAMPLEMESSAGE = _descriptor.Descriptor(
  name='SampleMessage',
  full_name='sample.SampleMessage',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='message', full_name='sample.SampleMessage.message', index=0,
      number=1, type=9, cpp_type=9, label=2,
      has_default_value=False, default_value=unicode("", "utf-8"),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  options=None,
  is_extendable=False,
  extension_ranges=[],
  serialized_start=32,
  serialized_end=64,
)
```

Essentially everything we defined in the proto file is here. In fact, if you read the `Descriptor` code, the fiddly details of this structure mostly just organize data. The machinery that dynamically generates the class must therefore live in `GeneratedProtocolMessageType`, so let's look at its source:

```python
class GeneratedProtocolMessageType(type):

  _DESCRIPTOR_KEY = 'DESCRIPTOR'

  def __new__(cls, name, bases, dictionary):
    descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY]
    bases = _NewMessage(bases, descriptor, dictionary)
    superclass = super(GeneratedProtocolMessageType, cls)

    new_class = superclass.__new__(cls, name, bases, dictionary)
    setattr(descriptor, '_concrete_class', new_class)
    return new_class

  def __init__(cls, name, bases, dictionary):
    descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY]
    _InitMessage(descriptor, cls)
    superclass = super(GeneratedProtocolMessageType, cls)
    superclass.__init__(name, bases, dictionary)
```

The `__metaclass__` we saw earlier already told us that classes are generated dynamically through Python's meta-programming machinery; and `GeneratedProtocolMessageType` above indeed inherits from `type`, so it is itself a metaclass.

It is worth explaining what ends up in the three arguments passed to the metaclass when a class is defined with the `class` statement. A simple experiment: define a class and its metaclass as follows and create an instance:

```python
import pprint

class MetaType(type):

    def __new__(cls, name, bases, dictionary):

        print 'name: ' + pprint.pformat(name)
        print 'bases' + pprint.pformat(bases)
        print 'dictionary' + pprint.pformat(dictionary)

        superclass = super(MetaType, cls)
        new_class = superclass.__new__(cls, name, bases, dictionary)
        return new_class

    def __init__(cls, name, bases, dictionary):
        superclass = super(MetaType, cls)
        superclass.__init__(name, bases, dictionary)

class A(object):
    __metaclass__ = MetaType

    CLASS_PROPERTY = 'CLASS_PROPERTY'

    def method(self):
        pass

instance = A()
```

Running this script produces the following output:

    name: 'A'
    bases(<type 'object'>,)
    dictionary{'CLASS_PROPERTY': 'CLASS_PROPERTY',
     '__metaclass__': <class '__main__.MetaType'>,
     '__module__': '__main__',
     'method': <function method at 0x...>}

At this point, the whole process behind instantiating the objects corresponding to a `message` is clear:

First, protoc compiles the proto file and generates a `Descriptor` for the `message` plus a bare `class` skeleton; the skeleton's main job is to hand that `Descriptor` to `GeneratedProtocolMessageType` through a class attribute.

Then, when the Python interpreter actually creates the `class` for the `message`, `GeneratedProtocolMessageType` reads the attribute and field information from the `Descriptor` and dynamically inserts the corresponding properties and methods into the generated class via `_InitMessage` (which ultimately calls `setattr`).
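This wiring is easy to verify against the generated module (a sketch, assuming the pure-Python implementation and the sample_pb2.py from earlier):

```python
from sample_pb2 import SampleMessage, _SAMPLEMESSAGE

# the metaclass wired the descriptor and the generated class together
assert SampleMessage.DESCRIPTOR is _SAMPLEMESSAGE
assert _SAMPLEMESSAGE._concrete_class is SampleMessage

# fields declared in the proto file were injected as properties
msg = SampleMessage()
msg.message = u'hello'
print msg.IsInitialized()   # True: the only required field is set
```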

## service

Compared with `message`, the structure of `service` is more complex, and the project documentation covers `service` less thoroughly. In short, a `service` generates an abstraction layer for RPC calls from the interface definitions in the proto file. This layer is deliberately independent of any RPC implementation: protobuf only helps you generate a uniform calling interface across languages, and underneath that interface you can use any communication mechanism you like to carry out the RPC.

Nice as that sounds, such an abstraction also brings a lot of unnecessary indirection, and since *protobuf 2.3* using `service` to implement RPC has been discouraged. However, the `plugins` mechanism meant to replace `service` is still experimental, and many existing RPC implementations are still based on `service`, so this post sticks with `service` to dissect how RPC can be built on `protobuf`.

Implementing RPC with *protobuf*'s `service` mainly involves three objects:

1. `Service`: the abstract interface of the methods callable over RPC; a concrete service or stub inherits this abstract interface and provides the implementation.

2. `RpcChannel`: responsible for communicating with a `Service` and invoking the RPC methods it provides. Normally the caller does not use the `RpcChannel` directly; instead it wraps the `RpcChannel` in a `stub` and calls the stub's function interface, which turns the call into a data stream transmitted through the `RpcChannel`.

3. `RpcController`: provides a way to control the RPC call and to find out about errors that occur during it.

We again use the earlier example to dissect `service`, and later a simple *Echo Service* RPC call will show how the three abstractions cooperate.

As before, we start from the Python code compiled from the proto file. The two abstract classes for the `service` interface are defined as:

```python
class SampleService(_service.Service):
  __metaclass__ = service_reflection.GeneratedServiceType
  DESCRIPTOR = _SAMPLESERVICE

class SampleService_Stub(SampleService):
  __metaclass__ = service_reflection.GeneratedServiceStubType
  DESCRIPTOR = _SAMPLESERVICE
```

`SampleService` is the abstract interface for the **callee** side of the service; the callee inherits it and implements the corresponding methods to serve callers.

`SampleService_Stub` is the abstract `stub` interface for the **caller** side. The caller's job is to inherit it and turn invocations of the RPC function interface into a data stream delivered to the callee through a communication channel.

As with `message`, these two classes are mere skeletons; the real implementation comes from the `__metaclass__` and the `Descriptor`.

Let's first see what a `service` `Descriptor` looks like:

```python
_SAMPLESERVICE = _descriptor.ServiceDescriptor(
  name='SampleService',
  full_name='sample.SampleService',
  file=DESCRIPTOR,
  index=0,
  options=None,
  serialized_start=66,
  serialized_end=126,
  methods=[
    _descriptor.MethodDescriptor(
      name='echo',
      full_name='sample.SampleService.echo',
      index=0,
      containing_service=None,
      input_type=_SAMPLEMESSAGE,
      output_type=_VOID,
      options=None,
    ),
])
```

This `Descriptor` again carries many attributes, but the one that matters most is `methods`, a `list` of all the methods defined in the `service`. It matters because when we later implement the low-level RPC communication, the data that mainly has to travel is a description of the RPC method being called and its arguments.

Next, the metaclass of the callee-side `Service`, **`GeneratedServiceType`**:

```python
class GeneratedServiceType(type):

  _DESCRIPTOR_KEY = 'DESCRIPTOR'

  def __init__(cls, name, bases, dictionary):
    if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary:
      return
    descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY]
    service_builder = _ServiceBuilder(descriptor)
    service_builder.BuildService(cls)
```

This layer is again very thin; for the details we must trace further into the code of `_ServiceBuilder.BuildService`:

```python

class _ServiceBuilder(object):

  def __init__(self, service_descriptor):
    self.descriptor = service_descriptor

  def BuildService(self, cls):
    def _WrapCallMethod(srvc, method_descriptor,
                        rpc_controller, request, callback):
      return self._CallMethod(srvc, method_descriptor,
                              rpc_controller, request, callback)
    self.cls = cls
    cls.CallMethod = _WrapCallMethod
    cls.GetDescriptor = staticmethod(lambda: self.descriptor)
    cls.GetDescriptor.__doc__ = "Returns the service descriptor."
    cls.GetRequestClass = self._GetRequestClass
    cls.GetResponseClass = self._GetResponseClass
    for method in self.descriptor.methods:
      setattr(cls, method.name, self._GenerateNonImplementedMethod(method))

  def _CallMethod(self, srvc, method_descriptor,
                  rpc_controller, request, callback):
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'CallMethod() given method descriptor for wrong service type.')
    method = getattr(srvc, method_descriptor.name)
    return method(rpc_controller, request, callback)

  def _GetRequestClass(self, method_descriptor):
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'GetRequestClass() given method descriptor for wrong service type.')
    return method_descriptor.input_type._concrete_class

  def _GetResponseClass(self, method_descriptor):
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'GetResponseClass() given method descriptor for wrong service type.')
    return method_descriptor.output_type._concrete_class

  def _GenerateNonImplementedMethod(self, method):
    return lambda inst, rpc_controller, request, callback: (
        self._NonImplementedMethod(method.name, rpc_controller, callback))

  def _NonImplementedMethod(self, method_name, rpc_controller, callback):
    rpc_controller.SetFailed('Method %s not implemented.' % method_name)
    callback(None)
```

`BuildService` does two main things:

1. It hands references to its shared methods, such as `_CallMethod`, `_GetRequestClass`, and `_GetResponseClass`, to the newly generated class. The nested `_WrapCallMethod` function exists only so that, when `CallMethod` is invoked on a `Service` instance, that instance is passed to `_CallMethod` as the `srvc` argument.

2. It "injects" the RPC call interface we defined in the proto `service` into the class definition via `setattr`.

Pay particular attention to `_CallMethod`: its main job is to resolve the incoming `method_descriptor` into a call to the corresponding method on `srvc`. So as long as we can deserialize the `MethodDescriptor` of an RPC call out of the communication data stream, we can use `_CallMethod` directly to reach the corresponding service. That is exactly the key piece the callee-side abstract interface asks us to implement.
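To see that dispatch in action, here is a minimal sketch (it assumes the `sample_pb2` module compiled earlier; `SampleServiceImpl` is a hypothetical implementation of mine, not generated code):

```python
from sample_pb2 import SampleService, Void

class SampleServiceImpl(SampleService):
    def echo(self, rpc_controller, request, callback):
        print 'echo received: %s' % request.message
        if callback:
            callback(Void())

service = SampleServiceImpl()
method = service.GetDescriptor().FindMethodByName('echo')
request = service.GetRequestClass(method)(message=u'ping')

# exactly what a server would do after unmarshalling the stream
service.CallMethod(method, None, request, lambda response: None)
```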
The caller-side **`GeneratedServiceStubType`** is structured similarly:

```python
class GeneratedServiceStubType(GeneratedServiceType):

  _DESCRIPTOR_KEY = 'DESCRIPTOR'

  def __init__(cls, name, bases, dictionary):
    super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary)

    if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary:
      return
    descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY]
    service_stub_builder = _ServiceStubBuilder(descriptor)
    service_stub_builder.BuildServiceStub(cls)
```

`GeneratedServiceStubType` not only includes everything `GeneratedServiceType` defines on the class object; on top of that it adds the stub-specific attributes via `_ServiceStubBuilder`, which is implemented as:

```python

class _ServiceStubBuilder(object):

  def __init__(self, service_descriptor):
    self.descriptor = service_descriptor

  def BuildServiceStub(self, cls):
    def _ServiceStubInit(stub, rpc_channel):
      stub.rpc_channel = rpc_channel
    self.cls = cls
    cls.__init__ = _ServiceStubInit
    for method in self.descriptor.methods:
      setattr(cls, method.name, self._GenerateStubMethod(method))

  def _GenerateStubMethod(self, method):
    return (lambda inst, rpc_controller, request, callback=None:
        self._StubMethod(inst, method, rpc_controller, request, callback))

  def _StubMethod(self, stub, method_descriptor,
                  rpc_controller, request, callback):
    return stub.rpc_channel.CallMethod(
        method_descriptor, rpc_controller, request,
        method_descriptor.output_type._concrete_class, callback)
```

Its main purpose is to wrap the `RpcChannel` so that a remote RPC is disguised as a local call. Two steps in this code are key:

1. The wrappers produced by `_GenerateStubMethod` funnel every call on a `stub` method into `_StubMethod`, and at the same time convert "calling a specific method" into "passing a `MethodDescriptor`", so the call can later be serialized for transmission.
2. `_StubMethod` then forwards the call to `RpcChannel.CallMethod`, which can push the call through the communication channel. In other words, implementing the caller side is mostly about how `RpcChannel.CallMethod` serializes the call and its arguments and transmits the data.

With that in mind, let's look at the definition of `RpcChannel`:

```python
class RpcChannel(object):
  def CallMethod(self, method_descriptor, rpc_controller,
                 request, response_class, done):
    raise NotImplementedError
```

The `RpcChannel` interface is plain and simple: a single `CallMethod` waiting for us to implement. Recall that `GeneratedServiceType` adds a very similar `CallMethod` to our `Service`; the only difference is that the callee-side `CallMethod` returns the `response` directly through its return value, while here the type of the `response` is specified by a function parameter. So connect the caller-side `CallMethod` and the callee-side `CallMethod` through a communication channel, and a complete RPC round trip falls out!
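This point can be demonstrated with a loopback channel that skips the network entirely (a sketch reusing the hypothetical `SampleServiceImpl` from the sketch above; a real channel would serialize and transmit instead):

```python
from google.protobuf import service
from sample_pb2 import SampleService_Stub, SampleMessage

class LoopbackRpcChannel(service.RpcChannel):
    """Hands the call straight to a local service instead of a socket."""

    def __init__(self, local_service):
        self.local_service = local_service

    def CallMethod(self, method_descriptor, rpc_controller,
                   request, response_class, done):
        # a real channel would marshal here, transmit, and let the
        # callee side unmarshal before invoking its own CallMethod
        self.local_service.CallMethod(method_descriptor, rpc_controller,
                                      request, done)

stub = SampleService_Stub(LoopbackRpcChannel(SampleServiceImpl()))
stub.echo(None, SampleMessage(message=u'ping'))   # prints via the service
```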
One piece is still missing so far: `RpcController`. This class exists mainly so that we can catch abnormal conditions during an RPC call, and it provides some extra control. How to implement it varies from case to case; its definition is very simple, just a handful of basic function interfaces. I will not go through it here; for what needs implementing, see the comments on the `RpcController` code in the `service.py` file of the *protobuf Python API*.

By now we have enough information to implement a concrete RPC mechanism on top of `protobuf`. The next post, 《Implement an Asynchronous RPC Based on Protocol Buffers》, builds on this one and explains how to construct a callable `Echo Service` :)

--------------------------------------------------------------------------------
/articles/2013-12-15-implement-an-asynchronous-rpc-basing-on-protocol-buffers.markdown:
--------------------------------------------------------------------------------
tags:
- protobuf
- RPC
- Network
comments: true
date: 2013-12-15 11:12:41 +0800
layout: post
status: public
title: 'Implement an Asynchronous RPC Based on Protocol Buffers'
---

The previous post, 《Dive Into Protocol Buffers Python API》, analyzed the code of the *protobuf* Python API. Now it is time for practice: implementing an asynchronous RPC mechanism with the *protobuf* `service` API.

For rigor, here is *wikipedia*'s description of a typical RPC call:

>1. The client calls the client stub. The call is a **local procedure call**, with parameters pushed on to the stack in the normal way.
>2. The client stub packs the parameters into a message and makes a system call to send the message. Packing the parameters is called **marshalling**.
>3. The client's local operating system **sends** the message from the client machine to the server machine.
>4. The local operating system on the server machine passes the **incoming** packets to the server stub.
>5. The server stub unpacks the parameters from the message. Unpacking the parameters is called **unmarshalling**.
>6. Finally, the server stub calls **the server procedure**. The reply traces the same steps in the reverse direction.

Steps 1 and 6 above are already handled for us by the *protobuf* `service` API; we only need to define the concrete calling interface in a proto file.

For the *marshalling* and *unmarshalling* of steps 2 and 5, the `service` API does not do everything for us, but *protobuf* already provides complete *serialization* machinery for methods and arguments; we only have to decide how to assemble packets from the serialized data.

Finally, the communication mechanism of steps 3 and 4 is entirely ours to implement. That is by design: in the most variable part (the many possible network architectures, protocols, and communication mechanisms), *protobuf* leaves programmers enough room to implement what suits their particular scenario, which lets *protobuf* be applied far more widely.

Back to the *Asynchronous RPC* of the title. A function call normally consists of an input and an output phase. For RPC we could behave as with most local functions: block after the call until the result comes back, then continue executing. But network transmission is comparatively slow, so that strategy is clearly very inefficient. We therefore adopt another one: after sending the call request, the caller immediately continues with subsequent work without waiting for the result, and comes back to handle the result once the RPC's answer arrives. The former strategy is *Synchronous RPC*; the latter is the *Asynchronous RPC* this post implements.

The implementation approach is simple: split one client-initiated RPC call into two. The client first issues the RPC call and then continues without waiting; after the server has received and processed the request, it issues another RPC call back to the client, delivering the computed result to the client through that call's argument.

That covers what needs saying about RPC itself; next we first tackle the communication mechanism of steps 3 and 4.

## Implementing the communication layer

We choose asyncore and TCP to implement the RPC communication layer. For details on asyncore, see [the asyncore documentation](http://docs.python.org/2/library/asyncore.html).

First abstract out the end-to-end connection and transfer; one end-to-end communication link can be wrapped in a `TcpConnection` like this:

```python
import asyncore


class TcpConnection(asyncore.dispatcher):

    ST_INIT = 0
    ST_ESTABLISHED = 1
    ST_DISCONNECTED = 2

    def __init__(self, sock, peername=None):
        asyncore.dispatcher.__init__(self, sock)
        self.peername = peername
        self.writebuff = ''
        self.status = TcpConnection.ST_ESTABLISHED if sock else TcpConnection.ST_INIT

    def handle_read(self):
        data = self.recv(4096)
        # process data here

    def handle_write(self):
        if self.writebuff:
            size = self.send(self.writebuff)
            self.writebuff = self.writebuff[size:]

    def writable(self):
        if self.status == TcpConnection.ST_ESTABLISHED:
            return len(self.writebuff) > 0
        else:
            return True

    def send_data(self, data):
        self.writebuff += data
```

The client actively initiates the connection request to the server and, once it succeeds, maintains **one** connection from itself to the server. So we can obtain a communicating client by subclassing `TcpConnection` and adding `connect` behavior:

```python
import socket


class TcpClient(TcpConnection):

    def __init__(self, ip, port):
        TcpConnection.__init__(self, None, (ip, port))

    def async_connect(self):
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect(self.peername)

    def handle_connect(self):
        self.status = TcpConnection.ST_ESTABLISHED
```

The server listens for and accepts client connection requests, maintaining one connection per client:

```python
import socket
import asyncore

import logger


class TcpServer(asyncore.dispatcher):

    def __init__(self, ip, port):
        asyncore.dispatcher.__init__(self)
        self.ip = ip
        self.port = port
        self.logger = logger.get_logger('TcpServer')

        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((self.ip, self.port))
        self.listen(10)

    def handle_accept(self):
        try:
            sock, addr = self.accept()
        except socket.error, e:
            self.logger.warning('accept error: ' + e.message)
            return
        except TypeError, e:
            self.logger.warning('accept error: ' + e.message)
            return

        conn = TcpConnection(sock, addr)
        self.handle_new_connection(conn)

    def handle_new_connection(self, conn):
        """ handle new connection here """
        pass
```

With that, we have a crude but working client/server communication layer.
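For completeness, a minimal sketch of how these classes are driven (asyncore needs a polling loop; both endpoints run in one process here purely for illustration):

```python
import asyncore

server = TcpServer('127.0.0.1', 9999)
client = TcpClient('127.0.0.1', 9999)
client.async_connect()

# dispatches handle_accept / handle_connect / handle_read callbacks;
# blocks until all channels are closed (interrupt to stop)
asyncore.loop(timeout=1)
```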

## Implementing the Echo service

With the communication layer in place we can move on. RPC never exists apart from some concrete business, so here we take the classic *Echo* service as the example and implement RPC with *protobuf*.

The proto definition for Echo:

```protobuf
package nightfade;

option py_generic_services = true;

message Void {}

message EchoString {
    required string message = 1;
}

service IEchoService {
    rpc echo(EchoString) returns(Void);
}

service IEchoClient {
    rpc respond(EchoString) returns(Void);
}
```

As explained above, since we are implementing *Asynchronous RPC*, the RPC call is split into two parts:

the client first calls `echo`; after the server receives the RPC request and processes it, it calls `respond` to deliver the result back to the client.

Compiling the proto file with *protoc* and analyzing the generated file were covered in 《Dive Into Protocol Buffers Python API》 and are not repeated here. Two questions need attention:

1. How to implement the *Service*.
2. How to hook the implemented *Service* up to our communication layer.

The Echo service itself is trivial, so the first question is easily settled:

```python
class EchoService(IEchoService):
    def echo(self, rpc_controller, echo_string, callback=None):
        client_stub = IEchoClient_Stub(rpc_controller.rpc_channel)
        client_stub.respond(rpc_controller, echo_string, callback=None)
```

What we need to consider next is the association with the communication layer.

The key to connecting a *protobuf* `service` to the communication layer is the `RpcChannel`.

Start with the caller side.

A caller's RPC invocation through the *stub* ultimately turns into a call to `RpcChannel.CallMethod()`, and this method is exactly the place *protobuf* leaves us to implement caller-side **marshalling** and data sending. That makes the problem straightforward; we implement `CallMethod` for our RpcChannel:

1. On both the caller and the callee, a `method_descriptor` has the same *index* within its containing *Service*. So for the *method_descriptor* part it suffices to marshal its *index*.
2. The RPC call's argument can be marshalled directly with *protobuf*'s `SerializeToString()` method, and then unmarshalled on the receiving side with `ParseFromString()`.
3. For packet *framing* we use a simple scheme: before each packet, send a 32-bit integer *HEAD* telling the receiver the size of the packet that follows.

Concretely, the code:

```python
import struct

from google.protobuf import service


class RpcChannel(service.RpcChannel):

    HEAD_FMT = '!I'
    INDEX_FMT = '!H'
    HEAD_LEN = struct.calcsize(HEAD_FMT)
    INDEX_LEN = struct.calcsize(INDEX_FMT)

    def __init__(self, conn):
        super(RpcChannel, self).__init__()
        self.conn = conn

    def CallMethod(self,
                   method_descriptor,
                   rpc_controller,
                   request,
                   response_class,
                   done):
        index = method_descriptor.index
        data = request.SerializeToString()
        size = RpcChannel.INDEX_LEN + len(data)

        self.conn.send_data(struct.pack(RpcChannel.HEAD_FMT, size))
        self.conn.send_data(struct.pack(RpcChannel.INDEX_FMT, index))
        self.conn.send_data(data)
```

Now the callee side.

On the callee side, what the *protobuf* `service` API does for us is this: when `IEchoService.CallMethod()` is called with the right `method_descriptor` and `request` argument, our concrete implementation of that method interface is invoked automatically. So the server mainly has to:

1. Receive the data sent by the caller.
2. Unmarshal the received packets into the `method_descriptor` and the `request` argument.
3. Call `EchoService.CallMethod()`.

Our `TcpConnection` already handles receiving data; it just is not yet connected to the later steps. Since *marshalling* is done by the `RpcChannel`, we implement *unmarshalling* in the `RpcChannel` as well, adding a `receive` method: whenever `TcpConnection` receives data, it hands it to `RpcChannel.receive` for processing.

```python
def receive(self, data):
    try:
        rpc_calls = self.rpc_parser.feed(data)
    except (AttributeError, IndexError), e:
        self.close()
        return

    for method_descriptor, request in rpc_calls:
        self.service_local.CallMethod(method_descriptor, self.rpc_controller, request, callback=None)
```

Here `rpc_parser` is responsible for unmarshalling the data stream into a series of `method_descriptor` and `request` pairs (its concrete implementation is not pasted here, but a plausible sketch follows below), and `service_local` is the service the server provides, the `EchoService`.
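One way such a parser could look, consistent with the framing scheme above (this is my own sketch, not the repository's actual rpc_parser; the `RpcParser` name and its `feed` method are assumptions):

```python
import struct

class RpcParser(object):
    """Splits a byte stream into (method_descriptor, request) pairs.

    Wire format per call: HEAD (4-byte size) + INDEX (2 bytes) + request bytes.
    """

    def __init__(self, service_descriptor):
        self.descriptor = service_descriptor
        self.buff = ''

    def feed(self, data):
        self.buff += data
        rpc_calls = []
        while len(self.buff) >= RpcChannel.HEAD_LEN:
            size, = struct.unpack(RpcChannel.HEAD_FMT,
                                  self.buff[:RpcChannel.HEAD_LEN])
            if len(self.buff) < RpcChannel.HEAD_LEN + size:
                break  # incomplete packet: wait for more data
            body = self.buff[RpcChannel.HEAD_LEN:RpcChannel.HEAD_LEN + size]
            self.buff = self.buff[RpcChannel.HEAD_LEN + size:]

            index, = struct.unpack(RpcChannel.INDEX_FMT,
                                   body[:RpcChannel.INDEX_LEN])
            method = self.descriptor.methods[index]
            request = method.input_type._concrete_class()
            request.ParseFromString(body[RpcChannel.INDEX_LEN:])
            rpc_calls.append((method, request))
        return rpc_calls
```

The callee side would construct it with the local service's descriptor, for example `RpcParser(IEchoService.GetDescriptor())`, and the client side with `IEchoClient.GetDescriptor()`.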
And with that, the basic implementation of our whole RPC call is complete! For reasons of space only some code fragments were shown; the complete code is available in my repository.

## Other notes

One important part is actually still missing from this RPC implementation: the `RpcController`. What is it for? Quoting *wikipedia* again:

>An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those that have no additional effects if called more than once) are easily handled, but enough difficulties remain that code to call remote procedures is often confined to carefully written low-level subsystems.

In short, an RPC can always fail for unpredictable reasons such as network problems, and we need a way to catch and handle the errors that occur during the RPC. That is what `RpcController` exists for: it defines some abstract interfaces for common error handling, to be implemented according to the concrete scenario.

Since the definition of `RpcController` is simple and clear, and tightly bound to the concrete scenario, no more effort is spent on it here; as the business logic grows more complex, it can be implemented case by case as needed (a minimal sketch follows below).
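For reference, one minimal shape such an implementation might take (a sketch; the `rpc_channel` attribute is included because the `EchoService` above reaches through the controller to find its channel):

```python
from google.protobuf import service

class RpcController(service.RpcController):
    """Records failure state; cancellation is left unimplemented."""

    def __init__(self, rpc_channel):
        self.rpc_channel = rpc_channel
        self._failed = False
        self._error = None

    def Reset(self):
        self._failed, self._error = False, None

    def Failed(self):
        return self._failed

    def ErrorText(self):
        return self._error

    def SetFailed(self, reason):
        self._failed, self._error = True, reason
```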
--------------------------------------------------------------------------------
/example/__init__.py:
--------------------------------------------------------------------------------
__author__ = 'nightfade'

--------------------------------------------------------------------------------
/example/echo_client.py:
--------------------------------------------------------------------------------
__author__ = 'nightfade'

from example.echo_service_pb2 import IEchoClient
import logger


class EchoClient(IEchoClient):

    def __init__(self):
        self.streamout = None

    def set_streamout(self, streamout):
        self.streamout = streamout

    def respond(self, rpc_controller, echo_string, callback):
        """ Called by RpcChannel.receive when a complete request has been received.
        """
        logger.get_logger('EchoClient').debug('EchoClient.respond')
        if self.streamout:
            self.streamout.write(echo_string.message)

        if callback:
            callback()

--------------------------------------------------------------------------------
/example/echo_service.proto:
--------------------------------------------------------------------------------
package nightfade;

option cc_generic_services = true;
option py_generic_services = true;

message Void {}

message EchoString {
    required string message = 1;
}


service IEchoService {
    rpc echo(EchoString) returns(Void);
}


service IEchoClient {
    rpc respond(EchoString) returns(Void);
}

--------------------------------------------------------------------------------
/example/echo_service.py:
--------------------------------------------------------------------------------
__author__ = 'nightfade'

from example.echo_service_pb2 import IEchoService, IEchoClient_Stub
import logger


class EchoService(IEchoService):

    def echo(self, rpc_controller, echo_string, callback):
        """ Called by RpcChannel.receive when a complete request has been received.
        """
        logger.get_logger('EchoService').info('echo service is called')
        # echo the message back to the caller unchanged
        client_stub = IEchoClient_Stub(rpc_controller.rpc_channel)
        client_stub.respond(rpc_controller, echo_string, callback=None)
        if callback:
            callback()

--------------------------------------------------------------------------------
/example/echo_service_pb2.py:
--------------------------------------------------------------------------------
1 | # Generated by the protocol buffer compiler. DO NOT EDIT!
2 | # source: echo_service.proto 3 | 4 | from google.protobuf import descriptor as _descriptor 5 | from google.protobuf import message as _message 6 | from google.protobuf import reflection as _reflection 7 | from google.protobuf import service as _service 8 | from google.protobuf import service_reflection 9 | from google.protobuf import descriptor_pb2 10 | # @@protoc_insertion_point(imports) 11 | 12 | 13 | 14 | 15 | DESCRIPTOR = _descriptor.FileDescriptor( 16 | name='echo_service.proto', 17 | package='nightfade', 18 | serialized_pb='\n\x12\x65\x63ho_service.proto\x12\tnightfade\"\x06\n\x04Void\"\x1d\n\nEchoString\x12\x0f\n\x07message\x18\x01 \x02(\t2>\n\x0cIEchoService\x12.\n\x04\x65\x63ho\x12\x15.nightfade.EchoString\x1a\x0f.nightfade.Void2@\n\x0bIEchoClient\x12\x31\n\x07respond\x12\x15.nightfade.EchoString\x1a\x0f.nightfade.VoidB\x06\x80\x01\x01\x90\x01\x01') 19 | 20 | 21 | 22 | 23 | _VOID = _descriptor.Descriptor( 24 | name='Void', 25 | full_name='nightfade.Void', 26 | filename=None, 27 | file=DESCRIPTOR, 28 | containing_type=None, 29 | fields=[ 30 | ], 31 | extensions=[ 32 | ], 33 | nested_types=[], 34 | enum_types=[ 35 | ], 36 | options=None, 37 | is_extendable=False, 38 | extension_ranges=[], 39 | serialized_start=33, 40 | serialized_end=39, 41 | ) 42 | 43 | 44 | _ECHOSTRING = _descriptor.Descriptor( 45 | name='EchoString', 46 | full_name='nightfade.EchoString', 47 | filename=None, 48 | file=DESCRIPTOR, 49 | containing_type=None, 50 | fields=[ 51 | _descriptor.FieldDescriptor( 52 | name='message', full_name='nightfade.EchoString.message', index=0, 53 | number=1, type=9, cpp_type=9, label=2, 54 | has_default_value=False, default_value=unicode("", "utf-8"), 55 | message_type=None, enum_type=None, containing_type=None, 56 | is_extension=False, extension_scope=None, 57 | options=None), 58 | ], 59 | extensions=[ 60 | ], 61 | nested_types=[], 62 | enum_types=[ 63 | ], 64 | options=None, 65 | is_extendable=False, 66 | extension_ranges=[], 67 | serialized_start=41, 68 | serialized_end=70, 69 | ) 70 | 71 | DESCRIPTOR.message_types_by_name['Void'] = _VOID 72 | DESCRIPTOR.message_types_by_name['EchoString'] = _ECHOSTRING 73 | 74 | class Void(_message.Message): 75 | __metaclass__ = _reflection.GeneratedProtocolMessageType 76 | DESCRIPTOR = _VOID 77 | 78 | # @@protoc_insertion_point(class_scope:nightfade.Void) 79 | 80 | class EchoString(_message.Message): 81 | __metaclass__ = _reflection.GeneratedProtocolMessageType 82 | DESCRIPTOR = _ECHOSTRING 83 | 84 | # @@protoc_insertion_point(class_scope:nightfade.EchoString) 85 | 86 | 87 | DESCRIPTOR.has_options = True 88 | DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), '\200\001\001\220\001\001') 89 | 90 | _IECHOSERVICE = _descriptor.ServiceDescriptor( 91 | name='IEchoService', 92 | full_name='nightfade.IEchoService', 93 | file=DESCRIPTOR, 94 | index=0, 95 | options=None, 96 | serialized_start=72, 97 | serialized_end=134, 98 | methods=[ 99 | _descriptor.MethodDescriptor( 100 | name='echo', 101 | full_name='nightfade.IEchoService.echo', 102 | index=0, 103 | containing_service=None, 104 | input_type=_ECHOSTRING, 105 | output_type=_VOID, 106 | options=None, 107 | ), 108 | ]) 109 | 110 | class IEchoService(_service.Service): 111 | __metaclass__ = service_reflection.GeneratedServiceType 112 | DESCRIPTOR = _IECHOSERVICE 113 | class IEchoService_Stub(IEchoService): 114 | __metaclass__ = service_reflection.GeneratedServiceStubType 115 | DESCRIPTOR = _IECHOSERVICE 116 | 117 | 118 | _IECHOCLIENT = _descriptor.ServiceDescriptor( 119 | 
name='IEchoClient', 120 | full_name='nightfade.IEchoClient', 121 | file=DESCRIPTOR, 122 | index=1, 123 | options=None, 124 | serialized_start=136, 125 | serialized_end=200, 126 | methods=[ 127 | _descriptor.MethodDescriptor( 128 | name='respond', 129 | full_name='nightfade.IEchoClient.respond', 130 | index=0, 131 | containing_service=None, 132 | input_type=_ECHOSTRING, 133 | output_type=_VOID, 134 | options=None, 135 | ), 136 | ]) 137 | 138 | class IEchoClient(_service.Service): 139 | __metaclass__ = service_reflection.GeneratedServiceType 140 | DESCRIPTOR = _IECHOCLIENT 141 | class IEchoClient_Stub(IEchoClient): 142 | __metaclass__ = service_reflection.GeneratedServiceStubType 143 | DESCRIPTOR = _IECHOCLIENT 144 | 145 | # @@protoc_insertion_point(module_scope) 146 | -------------------------------------------------------------------------------- /google/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nightfade/protobuf-RPC/5c6084f6d5a6b9affc56cddab6413b4b662e973b/google/__init__.py -------------------------------------------------------------------------------- /google/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nightfade/protobuf-RPC/5c6084f6d5a6b9affc56cddab6413b4b662e973b/google/__init__.pyc -------------------------------------------------------------------------------- /google/protobuf/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nightfade/protobuf-RPC/5c6084f6d5a6b9affc56cddab6413b4b662e973b/google/protobuf/__init__.py -------------------------------------------------------------------------------- /google/protobuf/compiler/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nightfade/protobuf-RPC/5c6084f6d5a6b9affc56cddab6413b4b662e973b/google/protobuf/compiler/__init__.py -------------------------------------------------------------------------------- /google/protobuf/compiler/plugin_pb2.py: -------------------------------------------------------------------------------- 1 | # Generated by the protocol buffer compiler. DO NOT EDIT! 
2 | # source: google/protobuf/compiler/plugin.proto 3 | 4 | from google.protobuf import descriptor as _descriptor 5 | from google.protobuf import message as _message 6 | from google.protobuf import reflection as _reflection 7 | from google.protobuf import descriptor_pb2 8 | # @@protoc_insertion_point(imports) 9 | 10 | 11 | import google.protobuf.descriptor_pb2 12 | 13 | 14 | DESCRIPTOR = _descriptor.FileDescriptor( 15 | name='google/protobuf/compiler/plugin.proto', 16 | package='google.protobuf.compiler', 17 | serialized_pb='\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"}\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\"\xaa\x01\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a>\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\tB,\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtos') 18 | 19 | 20 | 21 | 22 | _CODEGENERATORREQUEST = _descriptor.Descriptor( 23 | name='CodeGeneratorRequest', 24 | full_name='google.protobuf.compiler.CodeGeneratorRequest', 25 | filename=None, 26 | file=DESCRIPTOR, 27 | containing_type=None, 28 | fields=[ 29 | _descriptor.FieldDescriptor( 30 | name='file_to_generate', full_name='google.protobuf.compiler.CodeGeneratorRequest.file_to_generate', index=0, 31 | number=1, type=9, cpp_type=9, label=3, 32 | has_default_value=False, default_value=[], 33 | message_type=None, enum_type=None, containing_type=None, 34 | is_extension=False, extension_scope=None, 35 | options=None), 36 | _descriptor.FieldDescriptor( 37 | name='parameter', full_name='google.protobuf.compiler.CodeGeneratorRequest.parameter', index=1, 38 | number=2, type=9, cpp_type=9, label=1, 39 | has_default_value=False, default_value=unicode("", "utf-8"), 40 | message_type=None, enum_type=None, containing_type=None, 41 | is_extension=False, extension_scope=None, 42 | options=None), 43 | _descriptor.FieldDescriptor( 44 | name='proto_file', full_name='google.protobuf.compiler.CodeGeneratorRequest.proto_file', index=2, 45 | number=15, type=11, cpp_type=10, label=3, 46 | has_default_value=False, default_value=[], 47 | message_type=None, enum_type=None, containing_type=None, 48 | is_extension=False, extension_scope=None, 49 | options=None), 50 | ], 51 | extensions=[ 52 | ], 53 | nested_types=[], 54 | enum_types=[ 55 | ], 56 | options=None, 57 | is_extendable=False, 58 | extension_ranges=[], 59 | serialized_start=101, 60 | serialized_end=226, 61 | ) 62 | 63 | 64 | _CODEGENERATORRESPONSE_FILE = _descriptor.Descriptor( 65 | name='File', 66 | full_name='google.protobuf.compiler.CodeGeneratorResponse.File', 67 | filename=None, 68 | file=DESCRIPTOR, 69 | containing_type=None, 70 | fields=[ 71 | _descriptor.FieldDescriptor( 72 | name='name', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.name', index=0, 73 | number=1, type=9, cpp_type=9, label=1, 74 | has_default_value=False, default_value=unicode("", "utf-8"), 75 | message_type=None, enum_type=None, containing_type=None, 76 | is_extension=False, extension_scope=None, 77 | options=None), 78 | _descriptor.FieldDescriptor( 79 | name='insertion_point', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.insertion_point', index=1, 80 | 
number=2, type=9, cpp_type=9, label=1, 81 | has_default_value=False, default_value=unicode("", "utf-8"), 82 | message_type=None, enum_type=None, containing_type=None, 83 | is_extension=False, extension_scope=None, 84 | options=None), 85 | _descriptor.FieldDescriptor( 86 | name='content', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.content', index=2, 87 | number=15, type=9, cpp_type=9, label=1, 88 | has_default_value=False, default_value=unicode("", "utf-8"), 89 | message_type=None, enum_type=None, containing_type=None, 90 | is_extension=False, extension_scope=None, 91 | options=None), 92 | ], 93 | extensions=[ 94 | ], 95 | nested_types=[], 96 | enum_types=[ 97 | ], 98 | options=None, 99 | is_extendable=False, 100 | extension_ranges=[], 101 | serialized_start=337, 102 | serialized_end=399, 103 | ) 104 | 105 | _CODEGENERATORRESPONSE = _descriptor.Descriptor( 106 | name='CodeGeneratorResponse', 107 | full_name='google.protobuf.compiler.CodeGeneratorResponse', 108 | filename=None, 109 | file=DESCRIPTOR, 110 | containing_type=None, 111 | fields=[ 112 | _descriptor.FieldDescriptor( 113 | name='error', full_name='google.protobuf.compiler.CodeGeneratorResponse.error', index=0, 114 | number=1, type=9, cpp_type=9, label=1, 115 | has_default_value=False, default_value=unicode("", "utf-8"), 116 | message_type=None, enum_type=None, containing_type=None, 117 | is_extension=False, extension_scope=None, 118 | options=None), 119 | _descriptor.FieldDescriptor( 120 | name='file', full_name='google.protobuf.compiler.CodeGeneratorResponse.file', index=1, 121 | number=15, type=11, cpp_type=10, label=3, 122 | has_default_value=False, default_value=[], 123 | message_type=None, enum_type=None, containing_type=None, 124 | is_extension=False, extension_scope=None, 125 | options=None), 126 | ], 127 | extensions=[ 128 | ], 129 | nested_types=[_CODEGENERATORRESPONSE_FILE, ], 130 | enum_types=[ 131 | ], 132 | options=None, 133 | is_extendable=False, 134 | extension_ranges=[], 135 | serialized_start=229, 136 | serialized_end=399, 137 | ) 138 | 139 | _CODEGENERATORREQUEST.fields_by_name['proto_file'].message_type = google.protobuf.descriptor_pb2._FILEDESCRIPTORPROTO 140 | _CODEGENERATORRESPONSE_FILE.containing_type = _CODEGENERATORRESPONSE; 141 | _CODEGENERATORRESPONSE.fields_by_name['file'].message_type = _CODEGENERATORRESPONSE_FILE 142 | DESCRIPTOR.message_types_by_name['CodeGeneratorRequest'] = _CODEGENERATORREQUEST 143 | DESCRIPTOR.message_types_by_name['CodeGeneratorResponse'] = _CODEGENERATORRESPONSE 144 | 145 | class CodeGeneratorRequest(_message.Message): 146 | __metaclass__ = _reflection.GeneratedProtocolMessageType 147 | DESCRIPTOR = _CODEGENERATORREQUEST 148 | 149 | # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorRequest) 150 | 151 | class CodeGeneratorResponse(_message.Message): 152 | __metaclass__ = _reflection.GeneratedProtocolMessageType 153 | 154 | class File(_message.Message): 155 | __metaclass__ = _reflection.GeneratedProtocolMessageType 156 | DESCRIPTOR = _CODEGENERATORRESPONSE_FILE 157 | 158 | # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse.File) 159 | DESCRIPTOR = _CODEGENERATORRESPONSE 160 | 161 | # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse) 162 | 163 | 164 | DESCRIPTOR.has_options = True 165 | DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), '\n\034com.google.protobuf.compilerB\014PluginProtos') 166 | # @@protoc_insertion_point(module_scope) 167 
| -------------------------------------------------------------------------------- /google/protobuf/descriptor_database.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Provides a container for DescriptorProtos.""" 32 | 33 | __author__ = 'matthewtoia@google.com (Matt Toia)' 34 | 35 | 36 | class DescriptorDatabase(object): 37 | """A container accepting FileDescriptorProtos and maps DescriptorProtos.""" 38 | 39 | def __init__(self): 40 | self._file_desc_protos_by_file = {} 41 | self._file_desc_protos_by_symbol = {} 42 | 43 | def Add(self, file_desc_proto): 44 | """Adds the FileDescriptorProto and its types to this database. 45 | 46 | Args: 47 | file_desc_proto: The FileDescriptorProto to add. 48 | """ 49 | 50 | self._file_desc_protos_by_file[file_desc_proto.name] = file_desc_proto 51 | package = file_desc_proto.package 52 | for message in file_desc_proto.message_type: 53 | self._file_desc_protos_by_symbol.update( 54 | (name, file_desc_proto) for name in _ExtractSymbols(message, package)) 55 | for enum in file_desc_proto.enum_type: 56 | self._file_desc_protos_by_symbol[ 57 | '.'.join((package, enum.name))] = file_desc_proto 58 | 59 | def FindFileByName(self, name): 60 | """Finds the file descriptor proto by file name. 61 | 62 | Typically the file name is a relative path ending to a .proto file. The 63 | proto with the given name will have to have been added to this database 64 | using the Add method or else an error will be raised. 65 | 66 | Args: 67 | name: The file name to find. 68 | 69 | Returns: 70 | The file descriptor proto matching the name. 71 | 72 | Raises: 73 | KeyError if no file by the given name was added. 
74 | """ 75 | 76 | return self._file_desc_protos_by_file[name] 77 | 78 | def FindFileContainingSymbol(self, symbol): 79 | """Finds the file descriptor proto containing the specified symbol. 80 | 81 | The symbol should be a fully qualified name including the file descriptor's 82 | package and any containing messages. Some examples: 83 | 84 | 'some.package.name.Message' 85 | 'some.package.name.Message.NestedEnum' 86 | 87 | The file descriptor proto containing the specified symbol must be added to 88 | this database using the Add method or else an error will be raised. 89 | 90 | Args: 91 | symbol: The fully qualified symbol name. 92 | 93 | Returns: 94 | The file descriptor proto containing the symbol. 95 | 96 | Raises: 97 | KeyError if no file contains the specified symbol. 98 | """ 99 | 100 | return self._file_desc_protos_by_symbol[symbol] 101 | 102 | 103 | def _ExtractSymbols(desc_proto, package): 104 | """Pulls out all the symbols from a descriptor proto. 105 | 106 | Args: 107 | desc_proto: The proto to extract symbols from. 108 | package: The package containing the descriptor type. 109 | 110 | Yields: 111 | The fully qualified name found in the descriptor. 112 | """ 113 | 114 | message_name = '.'.join((package, desc_proto.name)) 115 | yield message_name 116 | for nested_type in desc_proto.nested_type: 117 | for symbol in _ExtractSymbols(nested_type, message_name): 118 | yield symbol 119 | for enum_type in desc_proto.enum_type: 120 | yield '.'.join((message_name, enum_type.name)) 121 | -------------------------------------------------------------------------------- /google/protobuf/descriptor_pool.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Provides DescriptorPool to use as a container for proto2 descriptors. 
32 | 33 | The DescriptorPool is used in conjection with a DescriptorDatabase to maintain 34 | a collection of protocol buffer descriptors for use when dynamically creating 35 | message types at runtime. 36 | 37 | For most applications protocol buffers should be used via modules generated by 38 | the protocol buffer compiler tool. This should only be used when the type of 39 | protocol buffers used in an application or library cannot be predetermined. 40 | 41 | Below is a straightforward example on how to use this class: 42 | 43 | pool = DescriptorPool() 44 | file_descriptor_protos = [ ... ] 45 | for file_descriptor_proto in file_descriptor_protos: 46 | pool.Add(file_descriptor_proto) 47 | my_message_descriptor = pool.FindMessageTypeByName('some.package.MessageType') 48 | 49 | The message descriptor can be used in conjunction with the message_factory 50 | module in order to create a protocol buffer class that can be encoded and 51 | decoded. 52 | """ 53 | 54 | __author__ = 'matthewtoia@google.com (Matt Toia)' 55 | 56 | from google.protobuf import descriptor_pb2 57 | from google.protobuf import descriptor 58 | from google.protobuf import descriptor_database 59 | 60 | 61 | class DescriptorPool(object): 62 | """A collection of protobufs dynamically constructed by descriptor protos.""" 63 | 64 | def __init__(self, descriptor_db=None): 65 | """Initializes a Pool of proto buffs. 66 | 67 | The descriptor_db argument to the constructor is provided to allow 68 | specialized file descriptor proto lookup code to be triggered on demand. An 69 | example would be an implementation which will read and compile a file 70 | specified in a call to FindFileByName() and not require the call to Add() 71 | at all. Results from this database will be cached internally here as well. 72 | 73 | Args: 74 | descriptor_db: A secondary source of file descriptors. 75 | """ 76 | 77 | self._internal_db = descriptor_database.DescriptorDatabase() 78 | self._descriptor_db = descriptor_db 79 | self._descriptors = {} 80 | self._enum_descriptors = {} 81 | self._file_descriptors = {} 82 | 83 | def Add(self, file_desc_proto): 84 | """Adds the FileDescriptorProto and its types to this pool. 85 | 86 | Args: 87 | file_desc_proto: The FileDescriptorProto to add. 88 | """ 89 | 90 | self._internal_db.Add(file_desc_proto) 91 | 92 | def FindFileByName(self, file_name): 93 | """Gets a FileDescriptor by file name. 94 | 95 | Args: 96 | file_name: The path to the file to get a descriptor for. 97 | 98 | Returns: 99 | A FileDescriptor for the named file. 100 | 101 | Raises: 102 | KeyError: if the file can not be found in the pool. 103 | """ 104 | 105 | try: 106 | file_proto = self._internal_db.FindFileByName(file_name) 107 | except KeyError as error: 108 | if self._descriptor_db: 109 | file_proto = self._descriptor_db.FindFileByName(file_name) 110 | else: 111 | raise error 112 | if not file_proto: 113 | raise KeyError('Cannot find a file named %s' % file_name) 114 | return self._ConvertFileProtoToFileDescriptor(file_proto) 115 | 116 | def FindFileContainingSymbol(self, symbol): 117 | """Gets the FileDescriptor for the file containing the specified symbol. 118 | 119 | Args: 120 | symbol: The name of the symbol to search for. 121 | 122 | Returns: 123 | A FileDescriptor that contains the specified symbol. 124 | 125 | Raises: 126 | KeyError: if the file can not be found in the pool. 
127 | """ 128 | 129 | try: 130 | file_proto = self._internal_db.FindFileContainingSymbol(symbol) 131 | except KeyError as error: 132 | if self._descriptor_db: 133 | file_proto = self._descriptor_db.FindFileContainingSymbol(symbol) 134 | else: 135 | raise error 136 | if not file_proto: 137 | raise KeyError('Cannot find a file containing %s' % symbol) 138 | return self._ConvertFileProtoToFileDescriptor(file_proto) 139 | 140 | def FindMessageTypeByName(self, full_name): 141 | """Loads the named descriptor from the pool. 142 | 143 | Args: 144 | full_name: The full name of the descriptor to load. 145 | 146 | Returns: 147 | The descriptor for the named type. 148 | """ 149 | 150 | full_name = full_name.lstrip('.') # fix inconsistent qualified name formats 151 | if full_name not in self._descriptors: 152 | self.FindFileContainingSymbol(full_name) 153 | return self._descriptors[full_name] 154 | 155 | def FindEnumTypeByName(self, full_name): 156 | """Loads the named enum descriptor from the pool. 157 | 158 | Args: 159 | full_name: The full name of the enum descriptor to load. 160 | 161 | Returns: 162 | The enum descriptor for the named type. 163 | """ 164 | 165 | full_name = full_name.lstrip('.') # fix inconsistent qualified name formats 166 | if full_name not in self._enum_descriptors: 167 | self.FindFileContainingSymbol(full_name) 168 | return self._enum_descriptors[full_name] 169 | 170 | def _ConvertFileProtoToFileDescriptor(self, file_proto): 171 | """Creates a FileDescriptor from a proto or returns a cached copy. 172 | 173 | This method also has the side effect of loading all the symbols found in 174 | the file into the appropriate dictionaries in the pool. 175 | 176 | Args: 177 | file_proto: The proto to convert. 178 | 179 | Returns: 180 | A FileDescriptor matching the passed in proto. 181 | """ 182 | 183 | if file_proto.name not in self._file_descriptors: 184 | file_descriptor = descriptor.FileDescriptor( 185 | name=file_proto.name, 186 | package=file_proto.package, 187 | options=file_proto.options, 188 | serialized_pb=file_proto.SerializeToString()) 189 | scope = {} 190 | dependencies = list(self._GetDeps(file_proto)) 191 | 192 | for dependency in dependencies: 193 | dep_desc = self.FindFileByName(dependency.name) 194 | dep_proto = descriptor_pb2.FileDescriptorProto.FromString( 195 | dep_desc.serialized_pb) 196 | package = '.' + dep_proto.package 197 | package_prefix = package + '.' 
198 |
199 | def _strip_package(symbol):
200 | if symbol.startswith(package_prefix):
201 | return symbol[len(package_prefix):]
202 | return symbol
203 |
204 | symbols = list(self._ExtractSymbols(dep_proto.message_type, package))
205 | scope.update(symbols)
206 | scope.update((_strip_package(k), v) for k, v in symbols)
207 |
208 | symbols = list(self._ExtractEnums(dep_proto.enum_type, package))
209 | scope.update(symbols)
210 | scope.update((_strip_package(k), v) for k, v in symbols)
211 |
212 | for message_type in file_proto.message_type:
213 | message_desc = self._ConvertMessageDescriptor(
214 | message_type, file_proto.package, file_descriptor, scope)
215 | file_descriptor.message_types_by_name[message_desc.name] = message_desc
216 | for enum_type in file_proto.enum_type:
217 | self._ConvertEnumDescriptor(enum_type, file_proto.package,
218 | file_descriptor, None, scope)
219 | for desc_proto in self._ExtractMessages(file_proto.message_type):
220 | self._SetFieldTypes(desc_proto, scope)
221 |
222 | for desc_proto in file_proto.message_type:
223 | desc = scope[desc_proto.name]
224 | file_descriptor.message_types_by_name[desc_proto.name] = desc
225 | self.Add(file_proto)
226 | self._file_descriptors[file_proto.name] = file_descriptor
227 |
228 | return self._file_descriptors[file_proto.name]
229 |
230 | def _ConvertMessageDescriptor(self, desc_proto, package=None, file_desc=None,
231 | scope=None):
232 | """Adds the proto to the pool in the specified package.
233 |
234 | Args:
235 | desc_proto: The descriptor_pb2.DescriptorProto protobuf message.
236 | package: The package the proto should be located in.
237 | file_desc: The file containing this message.
238 | scope: Dict mapping short and full symbols to message and enum types.
239 |
240 | Returns:
241 | The added descriptor.
242 | """
243 |
244 | if package:
245 | desc_name = '.'.join((package, desc_proto.name))
246 | else:
247 | desc_name = desc_proto.name
248 |
249 | if file_desc is None:
250 | file_name = None
251 | else:
252 | file_name = file_desc.name
253 |
254 | if scope is None:
255 | scope = {}
256 |
257 | nested = [
258 | self._ConvertMessageDescriptor(nested, desc_name, file_desc, scope)
259 | for nested in desc_proto.nested_type]
260 | enums = [
261 | self._ConvertEnumDescriptor(enum, desc_name, file_desc, None, scope)
262 | for enum in desc_proto.enum_type]
263 | fields = [self._MakeFieldDescriptor(field, desc_name, index)
264 | for index, field in enumerate(desc_proto.field)]
265 | extensions = [self._MakeFieldDescriptor(extension, desc_name, index, is_extension=True)
266 | for index, extension in enumerate(desc_proto.extension)]
267 | extension_ranges = [(r.start, r.end) for r in desc_proto.extension_range]
268 | if extension_ranges:
269 | is_extendable = True
270 | else:
271 | is_extendable = False
272 | desc = descriptor.Descriptor(
273 | name=desc_proto.name,
274 | full_name=desc_name,
275 | filename=file_name,
276 | containing_type=None,
277 | fields=fields,
278 | nested_types=nested,
279 | enum_types=enums,
280 | extensions=extensions,
281 | options=desc_proto.options,
282 | is_extendable=is_extendable,
283 | extension_ranges=extension_ranges,
284 | file=file_desc,
285 | serialized_start=None,
286 | serialized_end=None)
287 | for nested in desc.nested_types:
288 | nested.containing_type = desc
289 | for enum in desc.enum_types:
290 | enum.containing_type = desc
291 | scope[desc_proto.name] = desc
292 | scope['.'
+ desc_name] = desc 293 | self._descriptors[desc_name] = desc 294 | return desc 295 | 296 | def _ConvertEnumDescriptor(self, enum_proto, package=None, file_desc=None, 297 | containing_type=None, scope=None): 298 | """Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf. 299 | 300 | Args: 301 | enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message. 302 | package: Optional package name for the new message EnumDescriptor. 303 | file_desc: The file containing the enum descriptor. 304 | containing_type: The type containing this enum. 305 | scope: Scope containing available types. 306 | 307 | Returns: 308 | The added descriptor 309 | """ 310 | 311 | if package: 312 | enum_name = '.'.join((package, enum_proto.name)) 313 | else: 314 | enum_name = enum_proto.name 315 | 316 | if file_desc is None: 317 | file_name = None 318 | else: 319 | file_name = file_desc.name 320 | 321 | values = [self._MakeEnumValueDescriptor(value, index) 322 | for index, value in enumerate(enum_proto.value)] 323 | desc = descriptor.EnumDescriptor(name=enum_proto.name, 324 | full_name=enum_name, 325 | filename=file_name, 326 | file=file_desc, 327 | values=values, 328 | containing_type=containing_type, 329 | options=enum_proto.options) 330 | scope[enum_proto.name] = desc 331 | scope['.%s' % enum_name] = desc 332 | self._enum_descriptors[enum_name] = desc 333 | return desc 334 | 335 | def _MakeFieldDescriptor(self, field_proto, message_name, index, 336 | is_extension=False): 337 | """Creates a field descriptor from a FieldDescriptorProto. 338 | 339 | For message and enum type fields, this method will do a look up 340 | in the pool for the appropriate descriptor for that type. If it 341 | is unavailable, it will fall back to the _source function to 342 | create it. If this type is still unavailable, construction will 343 | fail. 344 | 345 | Args: 346 | field_proto: The proto describing the field. 347 | message_name: The name of the containing message. 348 | index: Index of the field 349 | is_extension: Indication that this field is for an extension. 350 | 351 | Returns: 352 | An initialized FieldDescriptor object 353 | """ 354 | 355 | if message_name: 356 | full_name = '.'.join((message_name, field_proto.name)) 357 | else: 358 | full_name = field_proto.name 359 | 360 | return descriptor.FieldDescriptor( 361 | name=field_proto.name, 362 | full_name=full_name, 363 | index=index, 364 | number=field_proto.number, 365 | type=field_proto.type, 366 | cpp_type=None, 367 | message_type=None, 368 | enum_type=None, 369 | containing_type=None, 370 | label=field_proto.label, 371 | has_default_value=False, 372 | default_value=None, 373 | is_extension=is_extension, 374 | extension_scope=None, 375 | options=field_proto.options) 376 | 377 | def _SetFieldTypes(self, desc_proto, scope): 378 | """Sets the field's type, cpp_type, message_type and enum_type. 379 | 380 | Args: 381 | desc_proto: The message descriptor to update. 382 | scope: Enclosing scope of available types. 383 | """ 384 | 385 | desc = scope[desc_proto.name] 386 | for field_proto, field_desc in zip(desc_proto.field, desc.fields): 387 | if field_proto.type_name: 388 | type_name = field_proto.type_name 389 | if type_name not in scope: 390 | type_name = '.' 
+ type_name
391 | desc = scope[type_name]
392 | else:
393 | desc = None
394 |
395 | if not field_proto.HasField('type'):
396 | if isinstance(desc, descriptor.Descriptor):
397 | field_proto.type = descriptor.FieldDescriptor.TYPE_MESSAGE
398 | else:
399 | field_proto.type = descriptor.FieldDescriptor.TYPE_ENUM
400 |
401 | field_desc.cpp_type = descriptor.FieldDescriptor.ProtoTypeToCppProtoType(
402 | field_proto.type)
403 |
404 | if (field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE
405 | or field_proto.type == descriptor.FieldDescriptor.TYPE_GROUP):
406 | field_desc.message_type = desc
407 |
408 | if field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
409 | field_desc.enum_type = desc
410 |
411 | if field_proto.label == descriptor.FieldDescriptor.LABEL_REPEATED:
412 | field_desc.has_default = False
413 | field_desc.default_value = []
414 | elif field_proto.HasField('default_value'):
415 | field_desc.has_default = True
416 | if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or
417 | field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT):
418 | field_desc.default_value = float(field_proto.default_value)
419 | elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING:
420 | field_desc.default_value = field_proto.default_value
421 | elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL:
422 | field_desc.default_value = field_proto.default_value.lower() == 'true'
423 | elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
424 | field_desc.default_value = field_desc.enum_type.values_by_name[
425 | field_proto.default_value].index
426 | else:
427 | field_desc.default_value = int(field_proto.default_value)
428 | else:
429 | field_desc.has_default = False
430 | field_desc.default_value = None
431 |
432 | field_desc.type = field_proto.type
433 |
434 | for nested_type in desc_proto.nested_type:
435 | self._SetFieldTypes(nested_type, scope)
436 |
437 | def _MakeEnumValueDescriptor(self, value_proto, index):
438 | """Creates an enum value descriptor object from an enum value proto.
439 |
440 | Args:
441 | value_proto: The proto describing the enum value.
442 | index: The index of the enum value.
443 |
444 | Returns:
445 | An initialized EnumValueDescriptor object.
446 | """
447 |
448 | return descriptor.EnumValueDescriptor(
449 | name=value_proto.name,
450 | index=index,
451 | number=value_proto.number,
452 | options=value_proto.options,
453 | type=None)
454 |
455 | def _ExtractSymbols(self, desc_protos, package):
456 | """Pulls out all the symbols from descriptor protos.
457 |
458 | Args:
459 | desc_protos: The protos to extract symbols from.
460 | package: The package containing the descriptor type.
461 | Yields:
462 | A two element tuple of the type name and descriptor object.
463 | """
464 |
465 | for desc_proto in desc_protos:
466 | if package:
467 | message_name = '.'.join((package, desc_proto.name))
468 | else:
469 | message_name = desc_proto.name
470 | message_desc = self.FindMessageTypeByName(message_name)
471 | yield (message_name, message_desc)
472 | for symbol in self._ExtractSymbols(desc_proto.nested_type, message_name):
473 | yield symbol
474 | for symbol in self._ExtractEnums(desc_proto.enum_type, message_name):
475 | yield symbol
476 |
477 | def _ExtractEnums(self, enum_protos, package):
478 | """Pulls out all the symbols from enum protos.
479 |
480 | Args:
481 | enum_protos: The protos to extract symbols from.
482 | package: The package containing the enum type.
483 |
484 | Yields:
485 | A two element tuple of the type name and enum descriptor object.
486 | """
487 |
488 | for enum_proto in enum_protos:
489 | if package:
490 | enum_name = '.'.join((package, enum_proto.name))
491 | else:
492 | enum_name = enum_proto.name
493 | enum_desc = self.FindEnumTypeByName(enum_name)
494 | yield (enum_name, enum_desc)
495 |
496 | def _ExtractMessages(self, desc_protos):
497 | """Pulls out all the message protos from descriptor protos.
498 |
499 | Args:
500 | desc_protos: The protos to extract symbols from.
501 |
502 | Yields:
503 | Descriptor protos.
504 | """
505 |
506 | for desc_proto in desc_protos:
507 | yield desc_proto
508 | for message in self._ExtractMessages(desc_proto.nested_type):
509 | yield message
510 |
511 | def _GetDeps(self, file_proto):
512 | """Recursively finds dependencies for file protos.
513 |
514 | Args:
515 | file_proto: The proto to get dependencies from.
516 |
517 | Yields:
518 | Each direct and indirect dependency.
519 | """
520 |
521 | for dependency in file_proto.dependency:
522 | dep_desc = self.FindFileByName(dependency)
523 | dep_proto = descriptor_pb2.FileDescriptorProto.FromString(
524 | dep_desc.serialized_pb)
525 | yield dep_proto
526 | for parent_dep in self._GetDeps(dep_proto):
527 | yield parent_dep
528 |
--------------------------------------------------------------------------------
/google/protobuf/internal/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nightfade/protobuf-RPC/5c6084f6d5a6b9affc56cddab6413b4b662e973b/google/protobuf/internal/__init__.py
--------------------------------------------------------------------------------
/google/protobuf/internal/api_implementation.py:
--------------------------------------------------------------------------------
1 | # Protocol Buffers - Google's data interchange format
2 | # Copyright 2008 Google Inc. All rights reserved.
3 | # http://code.google.com/p/protobuf/
4 | #
5 | # Redistribution and use in source and binary forms, with or without
6 | # modification, are permitted provided that the following conditions are
7 | # met:
8 | #
9 | # * Redistributions of source code must retain the above copyright
10 | # notice, this list of conditions and the following disclaimer.
11 | # * Redistributions in binary form must reproduce the above
12 | # copyright notice, this list of conditions and the following disclaimer
13 | # in the documentation and/or other materials provided with the
14 | # distribution.
15 | # * Neither the name of Google Inc. nor the names of its
16 | # contributors may be used to endorse or promote products derived from
17 | # this software without specific prior written permission.
18 | #
19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
22 | # FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """ 32 | This module is the central entity that determines which implementation of the 33 | API is used. 34 | """ 35 | 36 | __author__ = 'petar@google.com (Petar Petrov)' 37 | 38 | import os 39 | # This environment variable can be used to switch to a certain implementation 40 | # of the Python API. Right now only 'python' and 'cpp' are valid values. Any 41 | # other value will be ignored. 42 | _implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION', 43 | 'python') 44 | 45 | 46 | if _implementation_type != 'python': 47 | # For now, by default use the pure-Python implementation. 48 | # The code below checks if the C extension is available and 49 | # uses it if it is available. 50 | _implementation_type = 'cpp' 51 | ## Determine automatically which implementation to use. 52 | #try: 53 | # from google.protobuf.internal import cpp_message 54 | # _implementation_type = 'cpp' 55 | #except ImportError, e: 56 | # _implementation_type = 'python' 57 | 58 | 59 | # This environment variable can be used to switch between the two 60 | # 'cpp' implementations. Right now only 1 and 2 are valid values. Any 61 | # other value will be ignored. 62 | _implementation_version_str = os.getenv( 63 | 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION', 64 | '1') 65 | 66 | 67 | if _implementation_version_str not in ('1', '2'): 68 | raise ValueError( 69 | "unsupported PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION: '" + 70 | _implementation_version_str + "' (supported versions: 1, 2)" 71 | ) 72 | 73 | 74 | _implementation_version = int(_implementation_version_str) 75 | 76 | 77 | 78 | # Usage of this function is discouraged. Clients shouldn't care which 79 | # implementation of the API is in use. Note that there is no guarantee 80 | # that differences between APIs will be maintained. 81 | # Please don't use this function if possible. 82 | def Type(): 83 | return _implementation_type 84 | 85 | # See comment on 'Type' above. 86 | def Version(): 87 | return _implementation_version 88 | -------------------------------------------------------------------------------- /google/protobuf/internal/containers.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. 
nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Contains container classes to represent different protocol buffer types. 32 | 33 | This file defines container classes which represent categories of protocol 34 | buffer field types which need extra maintenance. Currently these categories 35 | are: 36 | - Repeated scalar fields - These are all repeated fields which aren't 37 | composite (e.g. they are of simple types like int32, string, etc). 38 | - Repeated composite fields - Repeated fields which are composite. This 39 | includes groups and nested messages. 40 | """ 41 | 42 | __author__ = 'petar@google.com (Petar Petrov)' 43 | 44 | 45 | class BaseContainer(object): 46 | 47 | """Base container class.""" 48 | 49 | # Minimizes memory usage and disallows assignment to other attributes. 50 | __slots__ = ['_message_listener', '_values'] 51 | 52 | def __init__(self, message_listener): 53 | """ 54 | Args: 55 | message_listener: A MessageListener implementation. 56 | The RepeatedScalarFieldContainer will call this object's 57 | Modified() method when it is modified. 58 | """ 59 | self._message_listener = message_listener 60 | self._values = [] 61 | 62 | def __getitem__(self, key): 63 | """Retrieves item by the specified key.""" 64 | return self._values[key] 65 | 66 | def __len__(self): 67 | """Returns the number of elements in the container.""" 68 | return len(self._values) 69 | 70 | def __ne__(self, other): 71 | """Checks if another instance isn't equal to this one.""" 72 | # The concrete classes should define __eq__. 73 | return not self == other 74 | 75 | def __hash__(self): 76 | raise TypeError('unhashable object') 77 | 78 | def __repr__(self): 79 | return repr(self._values) 80 | 81 | def sort(self, *args, **kwargs): 82 | # Continue to support the old sort_function keyword argument. 83 | # This is expected to be a rare occurrence, so use LBYL to avoid 84 | # the overhead of actually catching KeyError. 85 | if 'sort_function' in kwargs: 86 | kwargs['cmp'] = kwargs.pop('sort_function') 87 | self._values.sort(*args, **kwargs) 88 | 89 | 90 | class RepeatedScalarFieldContainer(BaseContainer): 91 | 92 | """Simple, type-checked, list-like container for holding repeated scalars.""" 93 | 94 | # Disallows assignment to other attributes. 95 | __slots__ = ['_type_checker'] 96 | 97 | def __init__(self, message_listener, type_checker): 98 | """ 99 | Args: 100 | message_listener: A MessageListener implementation. 101 | The RepeatedScalarFieldContainer will call this object's 102 | Modified() method when it is modified. 
103 | type_checker: A type_checkers.ValueChecker instance to run on elements 104 | inserted into this container. 105 | """ 106 | super(RepeatedScalarFieldContainer, self).__init__(message_listener) 107 | self._type_checker = type_checker 108 | 109 | def append(self, value): 110 | """Appends an item to the list. Similar to list.append().""" 111 | self._type_checker.CheckValue(value) 112 | self._values.append(value) 113 | if not self._message_listener.dirty: 114 | self._message_listener.Modified() 115 | 116 | def insert(self, key, value): 117 | """Inserts the item at the specified position. Similar to list.insert().""" 118 | self._type_checker.CheckValue(value) 119 | self._values.insert(key, value) 120 | if not self._message_listener.dirty: 121 | self._message_listener.Modified() 122 | 123 | def extend(self, elem_seq): 124 | """Extends by appending the given sequence. Similar to list.extend().""" 125 | if not elem_seq: 126 | return 127 | 128 | new_values = [] 129 | for elem in elem_seq: 130 | self._type_checker.CheckValue(elem) 131 | new_values.append(elem) 132 | self._values.extend(new_values) 133 | self._message_listener.Modified() 134 | 135 | def MergeFrom(self, other): 136 | """Appends the contents of another repeated field of the same type to this 137 | one. We do not check the types of the individual fields. 138 | """ 139 | self._values.extend(other._values) 140 | self._message_listener.Modified() 141 | 142 | def remove(self, elem): 143 | """Removes an item from the list. Similar to list.remove().""" 144 | self._values.remove(elem) 145 | self._message_listener.Modified() 146 | 147 | def __setitem__(self, key, value): 148 | """Sets the item on the specified position.""" 149 | self._type_checker.CheckValue(value) 150 | self._values[key] = value 151 | self._message_listener.Modified() 152 | 153 | def __getslice__(self, start, stop): 154 | """Retrieves the subset of items from between the specified indices.""" 155 | return self._values[start:stop] 156 | 157 | def __setslice__(self, start, stop, values): 158 | """Sets the subset of items from between the specified indices.""" 159 | new_values = [] 160 | for value in values: 161 | self._type_checker.CheckValue(value) 162 | new_values.append(value) 163 | self._values[start:stop] = new_values 164 | self._message_listener.Modified() 165 | 166 | def __delitem__(self, key): 167 | """Deletes the item at the specified position.""" 168 | del self._values[key] 169 | self._message_listener.Modified() 170 | 171 | def __delslice__(self, start, stop): 172 | """Deletes the subset of items from between the specified indices.""" 173 | del self._values[start:stop] 174 | self._message_listener.Modified() 175 | 176 | def __eq__(self, other): 177 | """Compares the current instance with another one.""" 178 | if self is other: 179 | return True 180 | # Special case for the same type which should be common and fast. 181 | if isinstance(other, self.__class__): 182 | return other._values == self._values 183 | # We are presumably comparing against some other sequence type. 184 | return other == self._values 185 | 186 | 187 | class RepeatedCompositeFieldContainer(BaseContainer): 188 | 189 | """Simple, list-like container for holding repeated composite fields.""" 190 | 191 | # Disallows assignment to other attributes. 
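# A typical interaction with this container goes through a generated
# message class (an illustrative sketch; 'Person' and its repeated message
# field 'phones' are hypothetical):
#
#   person = Person()
#   phone = person.phones.add(number='555-0100')  # construct element in place
#   person.phones.extend(other_person.phones)     # copies each message
#
# The __slots__ declaration below serves the same purpose as in
# BaseContainer: it minimizes memory usage and disallows stray attributes.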
192 | __slots__ = ['_message_descriptor'] 193 | 194 | def __init__(self, message_listener, message_descriptor): 195 | """ 196 | Note that we pass in a descriptor instead of the generated directly, 197 | since at the time we construct a _RepeatedCompositeFieldContainer we 198 | haven't yet necessarily initialized the type that will be contained in the 199 | container. 200 | 201 | Args: 202 | message_listener: A MessageListener implementation. 203 | The RepeatedCompositeFieldContainer will call this object's 204 | Modified() method when it is modified. 205 | message_descriptor: A Descriptor instance describing the protocol type 206 | that should be present in this container. We'll use the 207 | _concrete_class field of this descriptor when the client calls add(). 208 | """ 209 | super(RepeatedCompositeFieldContainer, self).__init__(message_listener) 210 | self._message_descriptor = message_descriptor 211 | 212 | def add(self, **kwargs): 213 | """Adds a new element at the end of the list and returns it. Keyword 214 | arguments may be used to initialize the element. 215 | """ 216 | new_element = self._message_descriptor._concrete_class(**kwargs) 217 | new_element._SetListener(self._message_listener) 218 | self._values.append(new_element) 219 | if not self._message_listener.dirty: 220 | self._message_listener.Modified() 221 | return new_element 222 | 223 | def extend(self, elem_seq): 224 | """Extends by appending the given sequence of elements of the same type 225 | as this one, copying each individual message. 226 | """ 227 | message_class = self._message_descriptor._concrete_class 228 | listener = self._message_listener 229 | values = self._values 230 | for message in elem_seq: 231 | new_element = message_class() 232 | new_element._SetListener(listener) 233 | new_element.MergeFrom(message) 234 | values.append(new_element) 235 | listener.Modified() 236 | 237 | def MergeFrom(self, other): 238 | """Appends the contents of another repeated field of the same type to this 239 | one, copying each individual message. 240 | """ 241 | self.extend(other._values) 242 | 243 | def remove(self, elem): 244 | """Removes an item from the list. Similar to list.remove().""" 245 | self._values.remove(elem) 246 | self._message_listener.Modified() 247 | 248 | def __getslice__(self, start, stop): 249 | """Retrieves the subset of items from between the specified indices.""" 250 | return self._values[start:stop] 251 | 252 | def __delitem__(self, key): 253 | """Deletes the item at the specified position.""" 254 | del self._values[key] 255 | self._message_listener.Modified() 256 | 257 | def __delslice__(self, start, stop): 258 | """Deletes the subset of items from between the specified indices.""" 259 | del self._values[start:stop] 260 | self._message_listener.Modified() 261 | 262 | def __eq__(self, other): 263 | """Compares the current instance with another one.""" 264 | if self is other: 265 | return True 266 | if not isinstance(other, self.__class__): 267 | raise TypeError('Can only compare repeated composite fields against ' 268 | 'other repeated composite fields.') 269 | return self._values == other._values 270 | -------------------------------------------------------------------------------- /google/protobuf/internal/cpp_message.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 
3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Contains helper functions used to create protocol message classes from 32 | Descriptor objects at runtime backed by the protocol buffer C++ API. 
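These helpers are not normally called by user code. A rough sketch of how
the generated-message machinery drives them (everything here other than
NewMessage and InitMessage is an illustrative assumption):

  dictionary = {'DESCRIPTOR': message_descriptor}
  bases = NewMessage((message.Message,), message_descriptor, dictionary)
  cls = type('MyMessage', bases, dictionary)  # stand-in for the real metaclass
  InitMessage(message_descriptor, cls)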
33 | """ 34 | 35 | __author__ = 'petar@google.com (Petar Petrov)' 36 | 37 | import copy_reg 38 | import operator 39 | from google.protobuf.internal import _net_proto2___python 40 | from google.protobuf.internal import enum_type_wrapper 41 | from google.protobuf import message 42 | 43 | 44 | _LABEL_REPEATED = _net_proto2___python.LABEL_REPEATED 45 | _LABEL_OPTIONAL = _net_proto2___python.LABEL_OPTIONAL 46 | _CPPTYPE_MESSAGE = _net_proto2___python.CPPTYPE_MESSAGE 47 | _TYPE_MESSAGE = _net_proto2___python.TYPE_MESSAGE 48 | 49 | 50 | def GetDescriptorPool(): 51 | """Creates a new DescriptorPool C++ object.""" 52 | return _net_proto2___python.NewCDescriptorPool() 53 | 54 | 55 | _pool = GetDescriptorPool() 56 | 57 | 58 | def GetFieldDescriptor(full_field_name): 59 | """Searches for a field descriptor given a full field name.""" 60 | return _pool.FindFieldByName(full_field_name) 61 | 62 | 63 | def BuildFile(content): 64 | """Registers a new proto file in the underlying C++ descriptor pool.""" 65 | _net_proto2___python.BuildFile(content) 66 | 67 | 68 | def GetExtensionDescriptor(full_extension_name): 69 | """Searches for extension descriptor given a full field name.""" 70 | return _pool.FindExtensionByName(full_extension_name) 71 | 72 | 73 | def NewCMessage(full_message_name): 74 | """Creates a new C++ protocol message by its name.""" 75 | return _net_proto2___python.NewCMessage(full_message_name) 76 | 77 | 78 | def ScalarProperty(cdescriptor): 79 | """Returns a scalar property for the given descriptor.""" 80 | 81 | def Getter(self): 82 | return self._cmsg.GetScalar(cdescriptor) 83 | 84 | def Setter(self, value): 85 | self._cmsg.SetScalar(cdescriptor, value) 86 | 87 | return property(Getter, Setter) 88 | 89 | 90 | def CompositeProperty(cdescriptor, message_type): 91 | """Returns a Python property the given composite field.""" 92 | 93 | def Getter(self): 94 | sub_message = self._composite_fields.get(cdescriptor.name, None) 95 | if sub_message is None: 96 | cmessage = self._cmsg.NewSubMessage(cdescriptor) 97 | sub_message = message_type._concrete_class(__cmessage=cmessage) 98 | self._composite_fields[cdescriptor.name] = sub_message 99 | return sub_message 100 | 101 | return property(Getter) 102 | 103 | 104 | class RepeatedScalarContainer(object): 105 | """Container for repeated scalar fields.""" 106 | 107 | __slots__ = ['_message', '_cfield_descriptor', '_cmsg'] 108 | 109 | def __init__(self, msg, cfield_descriptor): 110 | self._message = msg 111 | self._cmsg = msg._cmsg 112 | self._cfield_descriptor = cfield_descriptor 113 | 114 | def append(self, value): 115 | self._cmsg.AddRepeatedScalar( 116 | self._cfield_descriptor, value) 117 | 118 | def extend(self, sequence): 119 | for element in sequence: 120 | self.append(element) 121 | 122 | def insert(self, key, value): 123 | values = self[slice(None, None, None)] 124 | values.insert(key, value) 125 | self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values) 126 | 127 | def remove(self, value): 128 | values = self[slice(None, None, None)] 129 | values.remove(value) 130 | self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values) 131 | 132 | def __setitem__(self, key, value): 133 | values = self[slice(None, None, None)] 134 | values[key] = value 135 | self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values) 136 | 137 | def __getitem__(self, key): 138 | return self._cmsg.GetRepeatedScalar(self._cfield_descriptor, key) 139 | 140 | def __delitem__(self, key): 141 | self._cmsg.DeleteRepeatedField(self._cfield_descriptor, key) 142 | 143 | 
def __len__(self): 144 | return len(self[slice(None, None, None)]) 145 | 146 | def __eq__(self, other): 147 | if self is other: 148 | return True 149 | if not operator.isSequenceType(other): 150 | raise TypeError( 151 | 'Can only compare repeated scalar fields against sequences.') 152 | # We are presumably comparing against some other sequence type. 153 | return other == self[slice(None, None, None)] 154 | 155 | def __ne__(self, other): 156 | return not self == other 157 | 158 | def __hash__(self): 159 | raise TypeError('unhashable object') 160 | 161 | def sort(self, *args, **kwargs): 162 | # Maintain compatibility with the previous interface. 163 | if 'sort_function' in kwargs: 164 | kwargs['cmp'] = kwargs.pop('sort_function') 165 | self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, 166 | sorted(self, *args, **kwargs)) 167 | 168 | 169 | def RepeatedScalarProperty(cdescriptor): 170 | """Returns a Python property the given repeated scalar field.""" 171 | 172 | def Getter(self): 173 | container = self._composite_fields.get(cdescriptor.name, None) 174 | if container is None: 175 | container = RepeatedScalarContainer(self, cdescriptor) 176 | self._composite_fields[cdescriptor.name] = container 177 | return container 178 | 179 | def Setter(self, new_value): 180 | raise AttributeError('Assignment not allowed to repeated field ' 181 | '"%s" in protocol message object.' % cdescriptor.name) 182 | 183 | doc = 'Magic attribute generated for "%s" proto field.' % cdescriptor.name 184 | return property(Getter, Setter, doc=doc) 185 | 186 | 187 | class RepeatedCompositeContainer(object): 188 | """Container for repeated composite fields.""" 189 | 190 | __slots__ = ['_message', '_subclass', '_cfield_descriptor', '_cmsg'] 191 | 192 | def __init__(self, msg, cfield_descriptor, subclass): 193 | self._message = msg 194 | self._cmsg = msg._cmsg 195 | self._subclass = subclass 196 | self._cfield_descriptor = cfield_descriptor 197 | 198 | def add(self, **kwargs): 199 | cmessage = self._cmsg.AddMessage(self._cfield_descriptor) 200 | return self._subclass(__cmessage=cmessage, __owner=self._message, **kwargs) 201 | 202 | def extend(self, elem_seq): 203 | """Extends by appending the given sequence of elements of the same type 204 | as this one, copying each individual message. 205 | """ 206 | for message in elem_seq: 207 | self.add().MergeFrom(message) 208 | 209 | def remove(self, value): 210 | # TODO(protocol-devel): This is inefficient as it needs to generate a 211 | # message pointer for each message only to do index(). Move this to a C++ 212 | # extension function. 
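# Concretely: self[slice(None, None, None)] materializes a Python wrapper
# around every element just so list.index() can locate the value; the
# removal itself then happens by position via __delitem__.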
213 | self.__delitem__(self[slice(None, None, None)].index(value)) 214 | 215 | def MergeFrom(self, other): 216 | for message in other[:]: 217 | self.add().MergeFrom(message) 218 | 219 | def __getitem__(self, key): 220 | cmessages = self._cmsg.GetRepeatedMessage( 221 | self._cfield_descriptor, key) 222 | subclass = self._subclass 223 | if not isinstance(cmessages, list): 224 | return subclass(__cmessage=cmessages, __owner=self._message) 225 | 226 | return [subclass(__cmessage=m, __owner=self._message) for m in cmessages] 227 | 228 | def __delitem__(self, key): 229 | self._cmsg.DeleteRepeatedField( 230 | self._cfield_descriptor, key) 231 | 232 | def __len__(self): 233 | return self._cmsg.FieldLength(self._cfield_descriptor) 234 | 235 | def __eq__(self, other): 236 | """Compares the current instance with another one.""" 237 | if self is other: 238 | return True 239 | if not isinstance(other, self.__class__): 240 | raise TypeError('Can only compare repeated composite fields against ' 241 | 'other repeated composite fields.') 242 | messages = self[slice(None, None, None)] 243 | other_messages = other[slice(None, None, None)] 244 | return messages == other_messages 245 | 246 | def __hash__(self): 247 | raise TypeError('unhashable object') 248 | 249 | def sort(self, cmp=None, key=None, reverse=False, **kwargs): 250 | # Maintain compatibility with the old interface. 251 | if cmp is None and 'sort_function' in kwargs: 252 | cmp = kwargs.pop('sort_function') 253 | 254 | # The cmp function, if provided, is passed the results of the key function, 255 | # so we only need to wrap one of them. 256 | if key is None: 257 | index_key = self.__getitem__ 258 | else: 259 | index_key = lambda i: key(self[i]) 260 | 261 | # Sort the list of current indexes by the underlying object. 262 | indexes = range(len(self)) 263 | indexes.sort(cmp=cmp, key=index_key, reverse=reverse) 264 | 265 | # Apply the transposition. 266 | for dest, src in enumerate(indexes): 267 | if dest == src: 268 | continue 269 | self._cmsg.SwapRepeatedFieldElements(self._cfield_descriptor, dest, src) 270 | # Don't swap the same value twice. 271 | indexes[src] = src 272 | 273 | 274 | def RepeatedCompositeProperty(cdescriptor, message_type): 275 | """Returns a Python property for the given repeated composite field.""" 276 | 277 | def Getter(self): 278 | container = self._composite_fields.get(cdescriptor.name, None) 279 | if container is None: 280 | container = RepeatedCompositeContainer( 281 | self, cdescriptor, message_type._concrete_class) 282 | self._composite_fields[cdescriptor.name] = container 283 | return container 284 | 285 | def Setter(self, new_value): 286 | raise AttributeError('Assignment not allowed to repeated field ' 287 | '"%s" in protocol message object.' % cdescriptor.name) 288 | 289 | doc = 'Magic attribute generated for "%s" proto field.' % cdescriptor.name 290 | return property(Getter, Setter, doc=doc) 291 | 292 | 293 | class ExtensionDict(object): 294 | """Extension dictionary added to each protocol message.""" 295 | 296 | def __init__(self, msg): 297 | self._message = msg 298 | self._cmsg = msg._cmsg 299 | self._values = {} 300 | 301 | def __setitem__(self, extension, value): 302 | from google.protobuf import descriptor 303 | if not isinstance(extension, descriptor.FieldDescriptor): 304 | raise KeyError('Bad extension %r.' 
% (extension,)) 305 | cdescriptor = extension._cdescriptor 306 | if (cdescriptor.label != _LABEL_OPTIONAL or 307 | cdescriptor.cpp_type == _CPPTYPE_MESSAGE): 308 | raise TypeError('Extension %r is repeated and/or a composite type.' % ( 309 | extension.full_name,)) 310 | self._cmsg.SetScalar(cdescriptor, value) 311 | self._values[extension] = value 312 | 313 | def __getitem__(self, extension): 314 | from google.protobuf import descriptor 315 | if not isinstance(extension, descriptor.FieldDescriptor): 316 | raise KeyError('Bad extension %r.' % (extension,)) 317 | 318 | cdescriptor = extension._cdescriptor 319 | if (cdescriptor.label != _LABEL_REPEATED and 320 | cdescriptor.cpp_type != _CPPTYPE_MESSAGE): 321 | return self._cmsg.GetScalar(cdescriptor) 322 | 323 | ext = self._values.get(extension, None) 324 | if ext is not None: 325 | return ext 326 | 327 | ext = self._CreateNewHandle(extension) 328 | self._values[extension] = ext 329 | return ext 330 | 331 | def ClearExtension(self, extension): 332 | from google.protobuf import descriptor 333 | if not isinstance(extension, descriptor.FieldDescriptor): 334 | raise KeyError('Bad extension %r.' % (extension,)) 335 | self._cmsg.ClearFieldByDescriptor(extension._cdescriptor) 336 | if extension in self._values: 337 | del self._values[extension] 338 | 339 | def HasExtension(self, extension): 340 | from google.protobuf import descriptor 341 | if not isinstance(extension, descriptor.FieldDescriptor): 342 | raise KeyError('Bad extension %r.' % (extension,)) 343 | return self._cmsg.HasFieldByDescriptor(extension._cdescriptor) 344 | 345 | def _FindExtensionByName(self, name): 346 | """Tries to find a known extension with the specified name. 347 | 348 | Args: 349 | name: Extension full name. 350 | 351 | Returns: 352 | Extension field descriptor. 353 | """ 354 | return self._message._extensions_by_name.get(name, None) 355 | 356 | def _CreateNewHandle(self, extension): 357 | cdescriptor = extension._cdescriptor 358 | if (cdescriptor.label != _LABEL_REPEATED and 359 | cdescriptor.cpp_type == _CPPTYPE_MESSAGE): 360 | cmessage = self._cmsg.NewSubMessage(cdescriptor) 361 | return extension.message_type._concrete_class(__cmessage=cmessage) 362 | 363 | if cdescriptor.label == _LABEL_REPEATED: 364 | if cdescriptor.cpp_type == _CPPTYPE_MESSAGE: 365 | return RepeatedCompositeContainer( 366 | self._message, cdescriptor, extension.message_type._concrete_class) 367 | else: 368 | return RepeatedScalarContainer(self._message, cdescriptor) 369 | # This shouldn't happen! 370 | assert False 371 | return None 372 | 373 | 374 | def NewMessage(bases, message_descriptor, dictionary): 375 | """Creates a new protocol message *class*.""" 376 | _AddClassAttributesForNestedExtensions(message_descriptor, dictionary) 377 | _AddEnumValues(message_descriptor, dictionary) 378 | _AddDescriptors(message_descriptor, dictionary) 379 | return bases 380 | 381 | 382 | def InitMessage(message_descriptor, cls): 383 | """Constructs a new message instance (called before instance's __init__).""" 384 | cls._extensions_by_name = {} 385 | _AddInitMethod(message_descriptor, cls) 386 | _AddMessageMethods(message_descriptor, cls) 387 | _AddPropertiesForExtensions(message_descriptor, cls) 388 | copy_reg.pickle(cls, lambda obj: (cls, (), obj.__getstate__())) 389 | 390 | 391 | def _AddDescriptors(message_descriptor, dictionary): 392 | """Sets up a new protocol message class dictionary. 393 | 394 | Args: 395 | message_descriptor: A Descriptor instance describing this message type. 
396 | dictionary: Class dictionary to which we'll add a '__slots__' entry. 397 | """ 398 | dictionary['__descriptors'] = {} 399 | for field in message_descriptor.fields: 400 | dictionary['__descriptors'][field.name] = GetFieldDescriptor( 401 | field.full_name) 402 | 403 | dictionary['__slots__'] = list(dictionary['__descriptors'].iterkeys()) + [ 404 | '_cmsg', '_owner', '_composite_fields', 'Extensions', '_HACK_REFCOUNTS'] 405 | 406 | 407 | def _AddEnumValues(message_descriptor, dictionary): 408 | """Sets class-level attributes for all enum fields defined in this message. 409 | 410 | Args: 411 | message_descriptor: Descriptor object for this message type. 412 | dictionary: Class dictionary that should be populated. 413 | """ 414 | for enum_type in message_descriptor.enum_types: 415 | dictionary[enum_type.name] = enum_type_wrapper.EnumTypeWrapper(enum_type) 416 | for enum_value in enum_type.values: 417 | dictionary[enum_value.name] = enum_value.number 418 | 419 | 420 | def _AddClassAttributesForNestedExtensions(message_descriptor, dictionary): 421 | """Adds class attributes for the nested extensions.""" 422 | extension_dict = message_descriptor.extensions_by_name 423 | for extension_name, extension_field in extension_dict.iteritems(): 424 | assert extension_name not in dictionary 425 | dictionary[extension_name] = extension_field 426 | 427 | 428 | def _AddInitMethod(message_descriptor, cls): 429 | """Adds an __init__ method to cls.""" 430 | 431 | # Create and attach message field properties to the message class. 432 | # This can be done just once per message class, since property setters and 433 | # getters are passed the message instance. 434 | # This makes message instantiation extremely fast, and at the same time it 435 | # doesn't require the creation of property objects for each message instance, 436 | # which saves a lot of memory. 437 | for field in message_descriptor.fields: 438 | field_cdescriptor = cls.__descriptors[field.name] 439 | if field.label == _LABEL_REPEATED: 440 | if field.cpp_type == _CPPTYPE_MESSAGE: 441 | value = RepeatedCompositeProperty(field_cdescriptor, field.message_type) 442 | else: 443 | value = RepeatedScalarProperty(field_cdescriptor) 444 | elif field.cpp_type == _CPPTYPE_MESSAGE: 445 | value = CompositeProperty(field_cdescriptor, field.message_type) 446 | else: 447 | value = ScalarProperty(field_cdescriptor) 448 | setattr(cls, field.name, value) 449 | 450 | # Attach a constant with the field number. 451 | constant_name = field.name.upper() + '_FIELD_NUMBER' 452 | setattr(cls, constant_name, field.number) 453 | 454 | def Init(self, **kwargs): 455 | """Message constructor.""" 456 | cmessage = kwargs.pop('__cmessage', None) 457 | if cmessage: 458 | self._cmsg = cmessage 459 | else: 460 | self._cmsg = NewCMessage(message_descriptor.full_name) 461 | 462 | # Keep a reference to the owner, as the owner keeps a reference to the 463 | # underlying protocol buffer message. 464 | owner = kwargs.pop('__owner', None) 465 | if owner: 466 | self._owner = owner 467 | 468 | if message_descriptor.is_extendable: 469 | self.Extensions = ExtensionDict(self) 470 | else: 471 | # Reference counting in the C++ code is broken and depends on 472 | # the Extensions reference to keep this object alive during unit 473 | # tests (see b/4856052). Remove this once b/4945904 is fixed. 
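# The self-reference below presumably mimics the keep-alive effect that the
# Extensions attribute provides for extendable messages, so both kinds of
# message satisfy the C++ layer's reference-count expectations.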
474 | self._HACK_REFCOUNTS = self 475 | self._composite_fields = {} 476 | 477 | for field_name, field_value in kwargs.iteritems(): 478 | field_cdescriptor = self.__descriptors.get(field_name, None) 479 | if not field_cdescriptor: 480 | raise ValueError('Protocol message has no "%s" field.' % field_name) 481 | if field_cdescriptor.label == _LABEL_REPEATED: 482 | if field_cdescriptor.cpp_type == _CPPTYPE_MESSAGE: 483 | field_name = getattr(self, field_name) 484 | for val in field_value: 485 | field_name.add().MergeFrom(val) 486 | else: 487 | getattr(self, field_name).extend(field_value) 488 | elif field_cdescriptor.cpp_type == _CPPTYPE_MESSAGE: 489 | getattr(self, field_name).MergeFrom(field_value) 490 | else: 491 | setattr(self, field_name, field_value) 492 | 493 | Init.__module__ = None 494 | Init.__doc__ = None 495 | cls.__init__ = Init 496 | 497 | 498 | def _IsMessageSetExtension(field): 499 | """Checks if a field is a message set extension.""" 500 | return (field.is_extension and 501 | field.containing_type.has_options and 502 | field.containing_type.GetOptions().message_set_wire_format and 503 | field.type == _TYPE_MESSAGE and 504 | field.message_type == field.extension_scope and 505 | field.label == _LABEL_OPTIONAL) 506 | 507 | 508 | def _AddMessageMethods(message_descriptor, cls): 509 | """Adds the methods to a protocol message class.""" 510 | if message_descriptor.is_extendable: 511 | 512 | def ClearExtension(self, extension): 513 | self.Extensions.ClearExtension(extension) 514 | 515 | def HasExtension(self, extension): 516 | return self.Extensions.HasExtension(extension) 517 | 518 | def HasField(self, field_name): 519 | return self._cmsg.HasField(field_name) 520 | 521 | def ClearField(self, field_name): 522 | child_cmessage = None 523 | if field_name in self._composite_fields: 524 | child_field = self._composite_fields[field_name] 525 | del self._composite_fields[field_name] 526 | 527 | child_cdescriptor = self.__descriptors[field_name] 528 | # TODO(anuraag): Support clearing repeated message fields as well. 529 | if (child_cdescriptor.label != _LABEL_REPEATED and 530 | child_cdescriptor.cpp_type == _CPPTYPE_MESSAGE): 531 | child_field._owner = None 532 | child_cmessage = child_field._cmsg 533 | 534 | if child_cmessage is not None: 535 | self._cmsg.ClearField(field_name, child_cmessage) 536 | else: 537 | self._cmsg.ClearField(field_name) 538 | 539 | def Clear(self): 540 | cmessages_to_release = [] 541 | for field_name, child_field in self._composite_fields.iteritems(): 542 | child_cdescriptor = self.__descriptors[field_name] 543 | # TODO(anuraag): Support clearing repeated message fields as well. 
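# Only singular message fields need their C++ submessage handed back for
# release; scalar and repeated fields are dropped wholesale by the
# self._cmsg.Clear() call below.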
544 | if (child_cdescriptor.label != _LABEL_REPEATED and 545 | child_cdescriptor.cpp_type == _CPPTYPE_MESSAGE): 546 | child_field._owner = None 547 | cmessages_to_release.append((child_cdescriptor, child_field._cmsg)) 548 | self._composite_fields.clear() 549 | self._cmsg.Clear(cmessages_to_release) 550 | 551 | def IsInitialized(self, errors=None): 552 | if self._cmsg.IsInitialized(): 553 | return True 554 | if errors is not None: 555 | errors.extend(self.FindInitializationErrors()); 556 | return False 557 | 558 | def SerializeToString(self): 559 | if not self.IsInitialized(): 560 | raise message.EncodeError( 561 | 'Message %s is missing required fields: %s' % ( 562 | self._cmsg.full_name, ','.join(self.FindInitializationErrors()))) 563 | return self._cmsg.SerializeToString() 564 | 565 | def SerializePartialToString(self): 566 | return self._cmsg.SerializePartialToString() 567 | 568 | def ParseFromString(self, serialized): 569 | self.Clear() 570 | self.MergeFromString(serialized) 571 | 572 | def MergeFromString(self, serialized): 573 | byte_size = self._cmsg.MergeFromString(serialized) 574 | if byte_size < 0: 575 | raise message.DecodeError('Unable to merge from string.') 576 | return byte_size 577 | 578 | def MergeFrom(self, msg): 579 | if not isinstance(msg, cls): 580 | raise TypeError( 581 | "Parameter to MergeFrom() must be instance of same class: " 582 | "expected %s got %s." % (cls.__name__, type(msg).__name__)) 583 | self._cmsg.MergeFrom(msg._cmsg) 584 | 585 | def CopyFrom(self, msg): 586 | self._cmsg.CopyFrom(msg._cmsg) 587 | 588 | def ByteSize(self): 589 | return self._cmsg.ByteSize() 590 | 591 | def SetInParent(self): 592 | return self._cmsg.SetInParent() 593 | 594 | def ListFields(self): 595 | all_fields = [] 596 | field_list = self._cmsg.ListFields() 597 | fields_by_name = cls.DESCRIPTOR.fields_by_name 598 | for is_extension, field_name in field_list: 599 | if is_extension: 600 | extension = cls._extensions_by_name[field_name] 601 | all_fields.append((extension, self.Extensions[extension])) 602 | else: 603 | field_descriptor = fields_by_name[field_name] 604 | all_fields.append( 605 | (field_descriptor, getattr(self, field_name))) 606 | all_fields.sort(key=lambda item: item[0].number) 607 | return all_fields 608 | 609 | def FindInitializationErrors(self): 610 | return self._cmsg.FindInitializationErrors() 611 | 612 | def __str__(self): 613 | return self._cmsg.DebugString() 614 | 615 | def __eq__(self, other): 616 | if self is other: 617 | return True 618 | if not isinstance(other, self.__class__): 619 | return False 620 | return self.ListFields() == other.ListFields() 621 | 622 | def __ne__(self, other): 623 | return not self == other 624 | 625 | def __hash__(self): 626 | raise TypeError('unhashable object') 627 | 628 | def __unicode__(self): 629 | # Lazy import to prevent circular import when text_format imports this file. 630 | from google.protobuf import text_format 631 | return text_format.MessageToString(self, as_utf8=True).decode('utf-8') 632 | 633 | # Attach the local methods to the message class. 634 | for key, value in locals().copy().iteritems(): 635 | if key not in ('key', 'value', '__builtins__', '__name__', '__doc__'): 636 | setattr(cls, key, value) 637 | 638 | # Static methods: 639 | 640 | def RegisterExtension(extension_handle): 641 | extension_handle.containing_type = cls.DESCRIPTOR 642 | cls._extensions_by_name[extension_handle.full_name] = extension_handle 643 | 644 | if _IsMessageSetExtension(extension_handle): 645 | # MessageSet extension. 
Also register under type name. 646 | cls._extensions_by_name[ 647 | extension_handle.message_type.full_name] = extension_handle 648 | cls.RegisterExtension = staticmethod(RegisterExtension) 649 | 650 | def FromString(string): 651 | msg = cls() 652 | msg.MergeFromString(string) 653 | return msg 654 | cls.FromString = staticmethod(FromString) 655 | 656 | 657 | 658 | def _AddPropertiesForExtensions(message_descriptor, cls): 659 | """Adds properties for all fields in this protocol message type.""" 660 | extension_dict = message_descriptor.extensions_by_name 661 | for extension_name, extension_field in extension_dict.iteritems(): 662 | constant_name = extension_name.upper() + '_FIELD_NUMBER' 663 | setattr(cls, constant_name, extension_field.number) 664 | -------------------------------------------------------------------------------- /google/protobuf/internal/enum_type_wrapper.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """A simple wrapper around enum types to expose utility functions. 32 | 33 | Instances are created as properties with the same name as the enum they wrap 34 | on proto classes. 
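A quick sketch (assuming a generated message class MyMessage whose .proto
file defines an enum Color with a value RED = 0):

  MyMessage.Color.Name(0)       # -> 'RED'
  MyMessage.Color.Value('RED')  # -> 0
  MyMessage.Color.items()       # -> [('RED', 0), ...]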
For usage, see:
35 | reflection_test.py
36 | """
37 |
38 | __author__ = 'rabsatt@google.com (Kevin Rabsatt)'
39 |
40 |
41 | class EnumTypeWrapper(object):
42 | """A utility for finding the names of enum values."""
43 |
44 | DESCRIPTOR = None
45 |
46 | def __init__(self, enum_type):
47 | """Inits EnumTypeWrapper with an EnumDescriptor."""
48 | self._enum_type = enum_type
49 | self.DESCRIPTOR = enum_type
50 |
51 | def Name(self, number):
52 | """Returns a string containing the name of an enum value."""
53 | if number in self._enum_type.values_by_number:
54 | return self._enum_type.values_by_number[number].name
55 | raise ValueError('Enum %s has no name defined for value %d' % (
56 | self._enum_type.name, number))
57 |
58 | def Value(self, name):
59 | """Returns the value corresponding to the given enum name."""
60 | if name in self._enum_type.values_by_name:
61 | return self._enum_type.values_by_name[name].number
62 | raise ValueError('Enum %s has no value defined for name %s' % (
63 | self._enum_type.name, name))
64 |
65 | def keys(self):
66 | """Return a list of the string names in the enum.
67 |
68 | These are returned in the order they were defined in the .proto file.
69 | """
70 |
71 | return [value_descriptor.name
72 | for value_descriptor in self._enum_type.values]
73 |
74 | def values(self):
75 | """Return a list of the integer values in the enum.
76 |
77 | These are returned in the order they were defined in the .proto file.
78 | """
79 |
80 | return [value_descriptor.number
81 | for value_descriptor in self._enum_type.values]
82 |
83 | def items(self):
84 | """Return a list of the (name, value) pairs of the enum.
85 |
86 | These are returned in the order they were defined in the .proto file.
87 | """
88 | return [(value_descriptor.name, value_descriptor.number)
89 | for value_descriptor in self._enum_type.values]
90 |
--------------------------------------------------------------------------------
/google/protobuf/internal/message_listener.py:
--------------------------------------------------------------------------------
1 | # Protocol Buffers - Google's data interchange format
2 | # Copyright 2008 Google Inc. All rights reserved.
3 | # http://code.google.com/p/protobuf/
4 | #
5 | # Redistribution and use in source and binary forms, with or without
6 | # modification, are permitted provided that the following conditions are
7 | # met:
8 | #
9 | # * Redistributions of source code must retain the above copyright
10 | # notice, this list of conditions and the following disclaimer.
11 | # * Redistributions in binary form must reproduce the above
12 | # copyright notice, this list of conditions and the following disclaimer
13 | # in the documentation and/or other materials provided with the
14 | # distribution.
15 | # * Neither the name of Google Inc. nor the names of its
16 | # contributors may be used to endorse or promote products derived from
17 | # this software without specific prior written permission.
18 | #
19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
22 | # A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Defines a listener interface for observing certain 32 | state transitions on Message objects. 33 | 34 | Also defines a null implementation of this interface. 35 | """ 36 | 37 | __author__ = 'robinson@google.com (Will Robinson)' 38 | 39 | 40 | class MessageListener(object): 41 | 42 | """Listens for modifications made to a message. Meant to be registered via 43 | Message._SetListener(). 44 | 45 | Attributes: 46 | dirty: If True, then calling Modified() would be a no-op. This can be 47 | used to avoid these calls entirely in the common case. 48 | """ 49 | 50 | def Modified(self): 51 | """Called every time the message is modified in such a way that the parent 52 | message may need to be updated. This currently means either: 53 | (a) The message was modified for the first time, so the parent message 54 | should henceforth mark the message as present. 55 | (b) The message's cached byte size became dirty -- i.e. the message was 56 | modified for the first time after a previous call to ByteSize(). 57 | Therefore the parent should also mark its byte size as dirty. 58 | Note that (a) implies (b), since new objects start out with a client cached 59 | size (zero). However, we document (a) explicitly because it is important. 60 | 61 | Modified() will *only* be called in response to one of these two events -- 62 | not every time the sub-message is modified. 63 | 64 | Note that if the listener's |dirty| attribute is true, then calling 65 | Modified at the moment would be a no-op, so it can be skipped. Performance- 66 | sensitive callers should check this attribute directly before calling since 67 | it will be true most of the time. 68 | """ 69 | 70 | raise NotImplementedError 71 | 72 | 73 | class NullMessageListener(object): 74 | 75 | """No-op MessageListener implementation.""" 76 | 77 | def Modified(self): 78 | pass 79 | -------------------------------------------------------------------------------- /google/protobuf/internal/type_checkers.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 
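To make the `Modified()` contract above concrete, here is a minimal listener that honours the `dirty` short-circuit. This is a sketch for illustration only; python_message.py wires its real listeners differently (they propagate dirtiness to parent messages):

```python
from google.protobuf.internal import message_listener

class DirtyFlagListener(message_listener.MessageListener):
    """Remembers that a child message was modified."""

    def __init__(self):
        self.dirty = False

    def Modified(self):
        # A real listener would also mark the parent's byte size dirty here.
        self.dirty = True

listener = DirtyFlagListener()
if not listener.dirty:  # performance-sensitive callers check |dirty| first
    listener.Modified()
assert listener.dirty
```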
18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Provides type checking routines. 32 | 33 | This module defines type checking utilities in the forms of dictionaries: 34 | 35 | VALUE_CHECKERS: A dictionary of field types and a value validation object. 36 | TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing 37 | function. 38 | TYPE_TO_ENCODER and TYPE_TO_SIZER: Dictionaries with field types and 39 | encoder and sizer constructors. 40 | FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their 41 | corresponding wire types. 42 | TYPE_TO_DECODER: A dictionary with field types and a decoder 43 | constructor. 44 | """ 45 | 46 | __author__ = 'robinson@google.com (Will Robinson)' 47 | 48 | from google.protobuf.internal import decoder 49 | from google.protobuf.internal import encoder 50 | from google.protobuf.internal import wire_format 51 | from google.protobuf import descriptor 52 | 53 | _FieldDescriptor = descriptor.FieldDescriptor 54 | 55 | 56 | def GetTypeChecker(cpp_type, field_type): 57 | """Returns a type checker for a message field of the specified types. 58 | 59 | Args: 60 | cpp_type: C++ type of the field (see descriptor.py). 61 | field_type: Protocol message field type (see descriptor.py). 62 | 63 | Returns: 64 | An instance of TypeChecker which can be used to verify the types 65 | of values assigned to a field of the specified type. 66 | """ 67 | if (cpp_type == _FieldDescriptor.CPPTYPE_STRING and 68 | field_type == _FieldDescriptor.TYPE_STRING): 69 | return UnicodeValueChecker() 70 | return _VALUE_CHECKERS[cpp_type] 71 | 72 | 73 | # None of the typecheckers below make any attempt to guard against people 74 | # subclassing builtin types and doing weird things. We're not trying to 75 | # protect against malicious clients here, just people accidentally shooting 76 | # themselves in the foot in obvious ways. 77 | 78 | class TypeChecker(object): 79 | 80 | """Type checker used to catch type errors as early as possible 81 | when the client is setting scalar fields in protocol messages. 82 | """ 83 | 84 | def __init__(self, *acceptable_types): 85 | self._acceptable_types = acceptable_types 86 | 87 | def CheckValue(self, proposed_value): 88 | if not isinstance(proposed_value, self._acceptable_types): 89 | message = ('%.1024r has type %s, but expected one of: %s' % 90 | (proposed_value, type(proposed_value), self._acceptable_types)) 91 | raise TypeError(message) 92 | 93 | 94 | # IntValueChecker and its subclasses perform integer type-checks 95 | # and bounds-checks. 96 | class IntValueChecker(object): 97 | 98 | """Checker used for integer fields.
Performs type-check and range check.""" 99 | 100 | def CheckValue(self, proposed_value): 101 | if not isinstance(proposed_value, (int, long)): 102 | message = ('%.1024r has type %s, but expected one of: %s' % 103 | (proposed_value, type(proposed_value), (int, long))) 104 | raise TypeError(message) 105 | if not self._MIN <= proposed_value <= self._MAX: 106 | raise ValueError('Value out of range: %d' % proposed_value) 107 | 108 | 109 | class UnicodeValueChecker(object): 110 | 111 | """Checker used for string fields.""" 112 | 113 | def CheckValue(self, proposed_value): 114 | if not isinstance(proposed_value, (str, unicode)): 115 | message = ('%.1024r has type %s, but expected one of: %s' % 116 | (proposed_value, type(proposed_value), (str, unicode))) 117 | raise TypeError(message) 118 | 119 | # If the value is of type 'str' make sure that it is in 7-bit ASCII 120 | # encoding. 121 | if isinstance(proposed_value, str): 122 | try: 123 | unicode(proposed_value, 'ascii') 124 | except UnicodeDecodeError: 125 | raise ValueError('%.1024r has type str, but isn\'t in 7-bit ASCII ' 126 | 'encoding. Non-ASCII strings must be converted to ' 127 | 'unicode objects before being added.' % 128 | (proposed_value)) 129 | 130 | 131 | class Int32ValueChecker(IntValueChecker): 132 | # We're sure to use ints instead of longs here since comparison may be more 133 | # efficient. 134 | _MIN = -2147483648 135 | _MAX = 2147483647 136 | 137 | 138 | class Uint32ValueChecker(IntValueChecker): 139 | _MIN = 0 140 | _MAX = (1 << 32) - 1 141 | 142 | 143 | class Int64ValueChecker(IntValueChecker): 144 | _MIN = -(1 << 63) 145 | _MAX = (1 << 63) - 1 146 | 147 | 148 | class Uint64ValueChecker(IntValueChecker): 149 | _MIN = 0 150 | _MAX = (1 << 64) - 1 151 | 152 | 153 | # Type-checkers for all scalar CPPTYPEs. 154 | _VALUE_CHECKERS = { 155 | _FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(), 156 | _FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(), 157 | _FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(), 158 | _FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(), 159 | _FieldDescriptor.CPPTYPE_DOUBLE: TypeChecker( 160 | float, int, long), 161 | _FieldDescriptor.CPPTYPE_FLOAT: TypeChecker( 162 | float, int, long), 163 | _FieldDescriptor.CPPTYPE_BOOL: TypeChecker(bool, int), 164 | _FieldDescriptor.CPPTYPE_ENUM: Int32ValueChecker(), 165 | _FieldDescriptor.CPPTYPE_STRING: TypeChecker(str), 166 | } 167 | 168 | 169 | # Map from field type to a function F, such that F(field_num, value) 170 | # gives the total byte size for a value of the given type. This 171 | # byte size includes tag information and any other additional space 172 | # associated with serializing "value". 
173 | TYPE_TO_BYTE_SIZE_FN = { 174 | _FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize, 175 | _FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize, 176 | _FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize, 177 | _FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize, 178 | _FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize, 179 | _FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize, 180 | _FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize, 181 | _FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize, 182 | _FieldDescriptor.TYPE_STRING: wire_format.StringByteSize, 183 | _FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize, 184 | _FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize, 185 | _FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize, 186 | _FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize, 187 | _FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize, 188 | _FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize, 189 | _FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize, 190 | _FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize, 191 | _FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize 192 | } 193 | 194 | 195 | # Maps from field types to encoder constructors. 196 | TYPE_TO_ENCODER = { 197 | _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder, 198 | _FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder, 199 | _FieldDescriptor.TYPE_INT64: encoder.Int64Encoder, 200 | _FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder, 201 | _FieldDescriptor.TYPE_INT32: encoder.Int32Encoder, 202 | _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder, 203 | _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder, 204 | _FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder, 205 | _FieldDescriptor.TYPE_STRING: encoder.StringEncoder, 206 | _FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder, 207 | _FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder, 208 | _FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder, 209 | _FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder, 210 | _FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder, 211 | _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder, 212 | _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder, 213 | _FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder, 214 | _FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder, 215 | } 216 | 217 | 218 | # Maps from field types to sizer constructors. 219 | TYPE_TO_SIZER = { 220 | _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer, 221 | _FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer, 222 | _FieldDescriptor.TYPE_INT64: encoder.Int64Sizer, 223 | _FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer, 224 | _FieldDescriptor.TYPE_INT32: encoder.Int32Sizer, 225 | _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer, 226 | _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer, 227 | _FieldDescriptor.TYPE_BOOL: encoder.BoolSizer, 228 | _FieldDescriptor.TYPE_STRING: encoder.StringSizer, 229 | _FieldDescriptor.TYPE_GROUP: encoder.GroupSizer, 230 | _FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer, 231 | _FieldDescriptor.TYPE_BYTES: encoder.BytesSizer, 232 | _FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer, 233 | _FieldDescriptor.TYPE_ENUM: encoder.EnumSizer, 234 | _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer, 235 | _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer, 236 | _FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer, 237 | _FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer, 238 | } 239 | 240 | 241 | # Maps from field type to a decoder constructor. 
242 | TYPE_TO_DECODER = { 243 | _FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder, 244 | _FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder, 245 | _FieldDescriptor.TYPE_INT64: decoder.Int64Decoder, 246 | _FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder, 247 | _FieldDescriptor.TYPE_INT32: decoder.Int32Decoder, 248 | _FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder, 249 | _FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder, 250 | _FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder, 251 | _FieldDescriptor.TYPE_STRING: decoder.StringDecoder, 252 | _FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder, 253 | _FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder, 254 | _FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder, 255 | _FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder, 256 | _FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder, 257 | _FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder, 258 | _FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder, 259 | _FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder, 260 | _FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder, 261 | } 262 | 263 | # Maps from field type to expected wiretype. 264 | FIELD_TYPE_TO_WIRE_TYPE = { 265 | _FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64, 266 | _FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32, 267 | _FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT, 268 | _FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT, 269 | _FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT, 270 | _FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64, 271 | _FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32, 272 | _FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT, 273 | _FieldDescriptor.TYPE_STRING: 274 | wire_format.WIRETYPE_LENGTH_DELIMITED, 275 | _FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP, 276 | _FieldDescriptor.TYPE_MESSAGE: 277 | wire_format.WIRETYPE_LENGTH_DELIMITED, 278 | _FieldDescriptor.TYPE_BYTES: 279 | wire_format.WIRETYPE_LENGTH_DELIMITED, 280 | _FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT, 281 | _FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT, 282 | _FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32, 283 | _FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64, 284 | _FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT, 285 | _FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT, 286 | } 287 | -------------------------------------------------------------------------------- /google/protobuf/internal/wire_format.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 
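Before continuing into wire_format.py, a short check of the type_checkers tables just shown. This is a sketch assuming the `google/protobuf` package in this tree is importable:

```python
from google.protobuf import descriptor
from google.protobuf.internal import type_checkers, wire_format

_FD = descriptor.FieldDescriptor

# GetTypeChecker dispatches on the CPPTYPE; int32 fields get bounds checks.
checker = type_checkers.GetTypeChecker(_FD.CPPTYPE_INT32, _FD.TYPE_INT32)
checker.CheckValue(42)            # accepted silently
try:
    checker.CheckValue(1 << 40)   # outside [-2**31, 2**31 - 1]
except ValueError as e:
    print('rejected: %s' % e)

# The dispatch tables key every lookup off the declared field type:
assert (type_checkers.FIELD_TYPE_TO_WIRE_TYPE[_FD.TYPE_SINT32] ==
        wire_format.WIRETYPE_VARINT)
```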
18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Constants and static functions to support protocol buffer wire format.""" 32 | 33 | __author__ = 'robinson@google.com (Will Robinson)' 34 | 35 | import struct 36 | from google.protobuf import descriptor 37 | from google.protobuf import message 38 | 39 | 40 | TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag. 41 | TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7 42 | 43 | # These numbers identify the wire type of a protocol buffer value. 44 | # We use the least-significant TAG_TYPE_BITS bits of the varint-encoded 45 | # tag-and-type to store one of these WIRETYPE_* constants. 46 | # These values must match WireType enum in google/protobuf/wire_format.h. 47 | WIRETYPE_VARINT = 0 48 | WIRETYPE_FIXED64 = 1 49 | WIRETYPE_LENGTH_DELIMITED = 2 50 | WIRETYPE_START_GROUP = 3 51 | WIRETYPE_END_GROUP = 4 52 | WIRETYPE_FIXED32 = 5 53 | _WIRETYPE_MAX = 5 54 | 55 | 56 | # Bounds for various integer types. 57 | INT32_MAX = int((1 << 31) - 1) 58 | INT32_MIN = int(-(1 << 31)) 59 | UINT32_MAX = (1 << 32) - 1 60 | 61 | INT64_MAX = (1 << 63) - 1 62 | INT64_MIN = -(1 << 63) 63 | UINT64_MAX = (1 << 64) - 1 64 | 65 | # "struct" format strings that will encode/decode the specified formats. 66 | FORMAT_UINT32_LITTLE_ENDIAN = '<I' 67 | FORMAT_UINT64_LITTLE_ENDIAN = '<Q' 68 | FORMAT_FLOAT_LITTLE_ENDIAN = '<f' 69 | FORMAT_DOUBLE_LITTLE_ENDIAN = '<d' 70 | 71 | 72 | # We'll have to provide alternate implementations of AppendLittleEndian*() on 73 | # any architectures where these checks fail. 74 | if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4: 75 | raise AssertionError('Format "I" is not a 32-bit number.') 76 | if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8: 77 | raise AssertionError('Format "Q" is not a 64-bit number.') 78 | 79 | 80 | def PackTag(field_number, wire_type): 81 | """Returns an unsigned 32-bit integer that encodes the field number and 82 | wire type information in standard protocol message wire format. 83 | 84 | Args: 85 | field_number: Expected to be an integer in the range [1; 1 << 29) 86 | wire_type: One of the WIRETYPE_* constants. 87 | """ 88 | if not 0 <= wire_type <= _WIRETYPE_MAX: 89 | raise message.EncodeError('Unknown wire type: %d' % wire_type) 90 | return (field_number << TAG_TYPE_BITS) | wire_type 91 | 92 | 93 | def UnpackTag(tag): 94 | """The inverse of PackTag(). Given an unsigned 32-bit number, 95 | returns a (field_number, wire_type) tuple. 96 | """ 97 | return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK) 98 | 99 | 100 | def ZigZagEncode(value): 101 | """ZigZag Transform: Encodes signed integers so that they can be 102 | effectively used with varint encoding. See wire_format.h for 103 | more details. 104 | """ 105 | if value >= 0: 106 | return value << 1 107 | return (value << 1) ^ (~0) 108 | 109 | 110 | def ZigZagDecode(value): 111 | """Inverse of ZigZagEncode().""" 112 | if not value & 0x1: 113 | return value >> 1 114 | return (value >> 1) ^ (~0) 115 | 116 | 117 | 118 | # The *ByteSize() functions below return the number of bytes required to 119 | # serialize "field number + type" information and then serialize the value. 120 | 121 | 122 | def Int32ByteSize(field_number, int32): 123 | return Int64ByteSize(field_number, int32) 124 | 125 | 126 | def Int32ByteSizeNoTag(int32): 127 | return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32) 128 | 129 | 130 | def Int64ByteSize(field_number, int64): 131 | # Have to convert to uint before calling UInt64ByteSize().
132 | return UInt64ByteSize(field_number, 0xffffffffffffffff & int64) 133 | 134 | 135 | def UInt32ByteSize(field_number, uint32): 136 | return UInt64ByteSize(field_number, uint32) 137 | 138 | 139 | def UInt64ByteSize(field_number, uint64): 140 | return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64) 141 | 142 | 143 | def SInt32ByteSize(field_number, int32): 144 | return UInt32ByteSize(field_number, ZigZagEncode(int32)) 145 | 146 | 147 | def SInt64ByteSize(field_number, int64): 148 | return UInt64ByteSize(field_number, ZigZagEncode(int64)) 149 | 150 | 151 | def Fixed32ByteSize(field_number, fixed32): 152 | return TagByteSize(field_number) + 4 153 | 154 | 155 | def Fixed64ByteSize(field_number, fixed64): 156 | return TagByteSize(field_number) + 8 157 | 158 | 159 | def SFixed32ByteSize(field_number, sfixed32): 160 | return TagByteSize(field_number) + 4 161 | 162 | 163 | def SFixed64ByteSize(field_number, sfixed64): 164 | return TagByteSize(field_number) + 8 165 | 166 | 167 | def FloatByteSize(field_number, flt): 168 | return TagByteSize(field_number) + 4 169 | 170 | 171 | def DoubleByteSize(field_number, double): 172 | return TagByteSize(field_number) + 8 173 | 174 | 175 | def BoolByteSize(field_number, b): 176 | return TagByteSize(field_number) + 1 177 | 178 | 179 | def EnumByteSize(field_number, enum): 180 | return UInt32ByteSize(field_number, enum) 181 | 182 | 183 | def StringByteSize(field_number, string): 184 | return BytesByteSize(field_number, string.encode('utf-8')) 185 | 186 | 187 | def BytesByteSize(field_number, b): 188 | return (TagByteSize(field_number) 189 | + _VarUInt64ByteSizeNoTag(len(b)) 190 | + len(b)) 191 | 192 | 193 | def GroupByteSize(field_number, message): 194 | return (2 * TagByteSize(field_number) # START and END group. 195 | + message.ByteSize()) 196 | 197 | 198 | def MessageByteSize(field_number, message): 199 | return (TagByteSize(field_number) 200 | + _VarUInt64ByteSizeNoTag(message.ByteSize()) 201 | + message.ByteSize()) 202 | 203 | 204 | def MessageSetItemByteSize(field_number, msg): 205 | # First compute the sizes of the tags. 206 | # There are 2 tags for the beginning and ending of the repeated group, that 207 | # is field number 1, one with field number 2 (type_id) and one with field 208 | # number 3 (message). 209 | total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3)) 210 | 211 | # Add the number of bytes for type_id. 212 | total_size += _VarUInt64ByteSizeNoTag(field_number) 213 | 214 | message_size = msg.ByteSize() 215 | 216 | # The number of bytes for encoding the length of the message. 217 | total_size += _VarUInt64ByteSizeNoTag(message_size) 218 | 219 | # The size of the message. 220 | total_size += message_size 221 | return total_size 222 | 223 | 224 | def TagByteSize(field_number): 225 | """Returns the bytes required to serialize a tag with this field number.""" 226 | # Just pass in type 0, since the type won't affect the tag+type size. 227 | return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0)) 228 | 229 | 230 | # Private helper function for the *ByteSize() functions above. 231 | 232 | def _VarUInt64ByteSizeNoTag(uint64): 233 | """Returns the number of bytes required to serialize a single varint 234 | using boundary value comparisons. (unrolled loop optimization -WPierce) 235 | uint64 must be unsigned. 
236 | """ 237 | if uint64 <= 0x7f: return 1 238 | if uint64 <= 0x3fff: return 2 239 | if uint64 <= 0x1fffff: return 3 240 | if uint64 <= 0xfffffff: return 4 241 | if uint64 <= 0x7ffffffff: return 5 242 | if uint64 <= 0x3ffffffffff: return 6 243 | if uint64 <= 0x1ffffffffffff: return 7 244 | if uint64 <= 0xffffffffffffff: return 8 245 | if uint64 <= 0x7fffffffffffffff: return 9 246 | if uint64 > UINT64_MAX: 247 | raise message.EncodeError('Value out of range: %d' % uint64) 248 | return 10 249 | 250 | 251 | NON_PACKABLE_TYPES = ( 252 | descriptor.FieldDescriptor.TYPE_STRING, 253 | descriptor.FieldDescriptor.TYPE_GROUP, 254 | descriptor.FieldDescriptor.TYPE_MESSAGE, 255 | descriptor.FieldDescriptor.TYPE_BYTES 256 | ) 257 | 258 | 259 | def IsTypePackable(field_type): 260 | """Return true iff packable = true is valid for fields of this type. 261 | 262 | Args: 263 | field_type: a FieldDescriptor::Type value. 264 | 265 | Returns: 266 | True iff fields of this type are packable. 267 | """ 268 | return field_type not in NON_PACKABLE_TYPES 269 | -------------------------------------------------------------------------------- /google/protobuf/message.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | # TODO(robinson): We should just make these methods all "pure-virtual" and move 32 | # all implementation out, into reflection.py for now. 33 | 34 | 35 | """Contains an abstract base class for protocol messages.""" 36 | 37 | __author__ = 'robinson@google.com (Will Robinson)' 38 | 39 | 40 | class Error(Exception): pass 41 | class DecodeError(Error): pass 42 | class EncodeError(Error): pass 43 | 44 | 45 | class Message(object): 46 | 47 | """Abstract base class for protocol messages. 
48 | 49 | Protocol message classes are almost always generated by the protocol 50 | compiler. These generated types subclass Message and implement the methods 51 | shown below. 52 | 53 | TODO(robinson): Link to an HTML document here. 54 | 55 | TODO(robinson): Document that instances of this class will also 56 | have an Extensions attribute with __getitem__ and __setitem__. 57 | Again, not sure how to best convey this. 58 | 59 | TODO(robinson): Document that the class must also have a static 60 | RegisterExtension(extension_field) method. 61 | Not sure how to best express at this point. 62 | """ 63 | 64 | # TODO(robinson): Document these fields and methods. 65 | 66 | __slots__ = [] 67 | 68 | DESCRIPTOR = None 69 | 70 | def __deepcopy__(self, memo=None): 71 | clone = type(self)() 72 | clone.MergeFrom(self) 73 | return clone 74 | 75 | def __eq__(self, other_msg): 76 | """Recursively compares two messages by value and structure.""" 77 | raise NotImplementedError 78 | 79 | def __ne__(self, other_msg): 80 | # Can't just say self != other_msg, since that would infinitely recurse. :) 81 | return not self == other_msg 82 | 83 | def __hash__(self): 84 | raise TypeError('unhashable object') 85 | 86 | def __str__(self): 87 | """Outputs a human-readable representation of the message.""" 88 | raise NotImplementedError 89 | 90 | def __unicode__(self): 91 | """Outputs a human-readable representation of the message.""" 92 | raise NotImplementedError 93 | 94 | def MergeFrom(self, other_msg): 95 | """Merges the contents of the specified message into current message. 96 | 97 | This method merges the contents of the specified message into the current 98 | message. Singular fields that are set in the specified message overwrite 99 | the corresponding fields in the current message. Repeated fields are 100 | appended. Singular sub-messages and groups are recursively merged. 101 | 102 | Args: 103 | other_msg: Message to merge into the current message. 104 | """ 105 | raise NotImplementedError 106 | 107 | def CopyFrom(self, other_msg): 108 | """Copies the content of the specified message into the current message. 109 | 110 | The method clears the current message and then merges the specified 111 | message using MergeFrom. 112 | 113 | Args: 114 | other_msg: Message to copy into the current one. 115 | """ 116 | if self is other_msg: 117 | return 118 | self.Clear() 119 | self.MergeFrom(other_msg) 120 | 121 | def Clear(self): 122 | """Clears all data that was set in the message.""" 123 | raise NotImplementedError 124 | 125 | def SetInParent(self): 126 | """Mark this as present in the parent. 127 | 128 | This normally happens automatically when you assign a field of a 129 | sub-message, but sometimes you want to make the sub-message 130 | present while keeping it empty. If you find yourself using this, 131 | you may want to reconsider your design.""" 132 | raise NotImplementedError 133 | 134 | def IsInitialized(self): 135 | """Checks if the message is initialized. 136 | 137 | Returns: 138 | The method returns True if the message is initialized (i.e. all of its 139 | required fields are set). 140 | """ 141 | raise NotImplementedError 142 | 143 | # TODO(robinson): MergeFromString() should probably return None and be 144 | # implemented in terms of a helper that returns the # of bytes read. Our 145 | # deserialization routines would use the helper when recursively 146 | # deserializing, but the end user would almost always just want the no-return 147 | # MergeFromString(). 
148 | 149 | def MergeFromString(self, serialized): 150 | """Merges serialized protocol buffer data into this message. 151 | 152 | When we find a field in |serialized| that is already present 153 | in this message: 154 | - If it's a "repeated" field, we append to the end of our list. 155 | - Else, if it's a scalar, we overwrite our field. 156 | - Else, (it's a nonrepeated composite), we recursively merge 157 | into the existing composite. 158 | 159 | TODO(robinson): Document handling of unknown fields. 160 | 161 | Args: 162 | serialized: Any object that allows us to call buffer(serialized) 163 | to access a string of bytes using the buffer interface. 164 | 165 | TODO(robinson): When we switch to a helper, this will return None. 166 | 167 | Returns: 168 | The number of bytes read from |serialized|. 169 | For non-group messages, this will always be len(serialized), 170 | but for messages which are actually groups, this will 171 | generally be less than len(serialized), since we must 172 | stop when we reach an END_GROUP tag. Note that if 173 | we *do* stop because of an END_GROUP tag, the number 174 | of bytes returned does not include the bytes 175 | for the END_GROUP tag information. 176 | """ 177 | raise NotImplementedError 178 | 179 | def ParseFromString(self, serialized): 180 | """Like MergeFromString(), except we clear the object first.""" 181 | self.Clear() 182 | self.MergeFromString(serialized) 183 | 184 | def SerializeToString(self): 185 | """Serializes the protocol message to a binary string. 186 | 187 | Returns: 188 | A binary string representation of the message if all of the required 189 | fields in the message are set (i.e. the message is initialized). 190 | 191 | Raises: 192 | message.EncodeError if the message isn't initialized. 193 | """ 194 | raise NotImplementedError 195 | 196 | def SerializePartialToString(self): 197 | """Serializes the protocol message to a binary string. 198 | 199 | This method is similar to SerializeToString but doesn't check if the 200 | message is initialized. 201 | 202 | Returns: 203 | A string representation of the partial message. 204 | """ 205 | raise NotImplementedError 206 | 207 | # TODO(robinson): Decide whether we like these better 208 | # than auto-generated has_foo() and clear_foo() methods 209 | # on the instances themselves. This way is less consistent 210 | # with C++, but it makes reflection-type access easier and 211 | # reduces the number of magically autogenerated things. 212 | # 213 | # TODO(robinson): Be sure to document (and test) exactly 214 | # which field names are accepted here. Are we case-sensitive? 215 | # What do we do with fields that share names with Python keywords 216 | # like 'lambda' and 'yield'? 217 | # 218 | # nnorwitz says: 219 | # """ 220 | # Typically (in python), an underscore is appended to names that are 221 | # keywords. So they would become lambda_ or yield_. 222 | # """ 223 | def ListFields(self): 224 | """Returns a list of (FieldDescriptor, value) tuples for all 225 | fields in the message which are not empty. A singular field is non-empty 226 | if HasField() would return true, and a repeated field is non-empty if 227 | it contains at least one element. The fields are ordered by field 228 | number""" 229 | raise NotImplementedError 230 | 231 | def HasField(self, field_name): 232 | """Checks if a certain field is set for the message. 
Note if the 233 | field_name is not defined in the message descriptor, ValueError will be 234 | raised.""" 235 | raise NotImplementedError 236 | 237 | def ClearField(self, field_name): 238 | raise NotImplementedError 239 | 240 | def HasExtension(self, extension_handle): 241 | raise NotImplementedError 242 | 243 | def ClearExtension(self, extension_handle): 244 | raise NotImplementedError 245 | 246 | def ByteSize(self): 247 | """Returns the serialized size of this message. 248 | Recursively calls ByteSize() on all contained messages. 249 | """ 250 | raise NotImplementedError 251 | 252 | def _SetListener(self, message_listener): 253 | """Internal method used by the protocol message implementation. 254 | Clients should not call this directly. 255 | 256 | Sets a listener that this message will call on certain state transitions. 257 | 258 | The purpose of this method is to register back-edges from children to 259 | parents at runtime, for the purpose of setting "has" bits and 260 | byte-size-dirty bits in the parent and ancestor objects whenever a child or 261 | descendant object is modified. 262 | 263 | If the client wants to disconnect this Message from the object tree, she 264 | explicitly sets callback to None. 265 | 266 | If message_listener is None, unregisters any existing listener. Otherwise, 267 | message_listener must implement the MessageListener interface in 268 | internal/message_listener.py, and we discard any listener registered 269 | via a previous _SetListener() call. 270 | """ 271 | raise NotImplementedError 272 | 273 | def __getstate__(self): 274 | """Support the pickle protocol.""" 275 | return dict(serialized=self.SerializePartialToString()) 276 | 277 | def __setstate__(self, state): 278 | """Support the pickle protocol.""" 279 | self.__init__() 280 | self.ParseFromString(state['serialized']) 281 | -------------------------------------------------------------------------------- /google/protobuf/message_factory.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. 
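The `__getstate__`/`__setstate__` pair at the end of message.py above means pickling goes through the wire format rather than through the object's internals, so a pickle round-trip preserves field values. A small demonstration using a generated class from this tree:

```python
import pickle

from google.protobuf import descriptor_pb2

msg = descriptor_pb2.FileDescriptorProto()
msg.name = 'pickled.proto'

# dumps() stores SerializePartialToString(); loads() re-inits and re-parses.
clone = pickle.loads(pickle.dumps(msg))
assert clone == msg and clone.name == 'pickled.proto'
```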
IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Provides a factory class for generating dynamic messages.""" 32 | 33 | __author__ = 'matthewtoia@google.com (Matt Toia)' 34 | 35 | from google.protobuf import descriptor_database 36 | from google.protobuf import descriptor_pool 37 | from google.protobuf import message 38 | from google.protobuf import reflection 39 | 40 | 41 | class MessageFactory(object): 42 | """Factory for creating Proto2 messages from descriptors in a pool.""" 43 | 44 | def __init__(self): 45 | """Initializes a new factory.""" 46 | self._classes = {} 47 | 48 | def GetPrototype(self, descriptor): 49 | """Builds a proto2 message class based on the passed in descriptor. 50 | 51 | Passing a descriptor with a fully qualified name matching a previous 52 | invocation will cause the same class to be returned. 53 | 54 | Args: 55 | descriptor: The descriptor to build from. 56 | 57 | Returns: 58 | A class describing the passed in descriptor. 59 | """ 60 | 61 | if descriptor.full_name not in self._classes: 62 | result_class = reflection.GeneratedProtocolMessageType( 63 | descriptor.name.encode('ascii', 'ignore'), 64 | (message.Message,), 65 | {'DESCRIPTOR': descriptor}) 66 | self._classes[descriptor.full_name] = result_class 67 | for field in descriptor.fields: 68 | if field.message_type: 69 | self.GetPrototype(field.message_type) 70 | return self._classes[descriptor.full_name] 71 | 72 | 73 | _DB = descriptor_database.DescriptorDatabase() 74 | _POOL = descriptor_pool.DescriptorPool(_DB) 75 | _FACTORY = MessageFactory() 76 | 77 | 78 | def GetMessages(file_protos): 79 | """Builds a dictionary of all the messages available in a set of files. 80 | 81 | Args: 82 | file_protos: A sequence of file protos to build messages out of. 83 | 84 | Returns: 85 | A dictionary containing all the message types in the files mapping the 86 | fully qualified name to a Message subclass for the descriptor. 87 | """ 88 | 89 | result = {} 90 | for file_proto in file_protos: 91 | _DB.Add(file_proto) 92 | for file_proto in file_protos: 93 | for desc in _GetAllDescriptors(file_proto.message_type, file_proto.package): 94 | result[desc.full_name] = _FACTORY.GetPrototype(desc) 95 | return result 96 | 97 | 98 | def _GetAllDescriptors(desc_protos, package): 99 | """Gets all levels of nested message types as a flattened list of descriptors. 100 | 101 | Args: 102 | desc_protos: The descriptor protos to process. 103 | package: The package where the protos are defined. 104 | 105 | Yields: 106 | Each message descriptor for each nested type. 
107 | """ 108 | 109 | for desc_proto in desc_protos: 110 | name = '.'.join((package, desc_proto.name)) 111 | yield _POOL.FindMessageTypeByName(name) 112 | for nested_desc in _GetAllDescriptors(desc_proto.nested_type, name): 113 | yield nested_desc 114 | -------------------------------------------------------------------------------- /google/protobuf/reflection.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | # This code is meant to work on Python 2.4 and above only. 32 | 33 | """Contains a metaclass and helper functions used to create 34 | protocol message classes from Descriptor objects at runtime. 35 | 36 | Recall that a metaclass is the "type" of a class. 37 | (A class is to a metaclass what an instance is to a class.) 38 | 39 | In this case, we use the GeneratedProtocolMessageType metaclass 40 | to inject all the useful functionality into the classes 41 | output by the protocol compiler at compile-time. 42 | 43 | The upshot of all this is that the real implementation 44 | details for ALL pure-Python protocol buffers are *here in 45 | this file*. 
46 | """ 47 | 48 | __author__ = 'robinson@google.com (Will Robinson)' 49 | 50 | 51 | from google.protobuf.internal import api_implementation 52 | from google.protobuf import descriptor as descriptor_mod 53 | from google.protobuf import message 54 | 55 | _FieldDescriptor = descriptor_mod.FieldDescriptor 56 | 57 | 58 | if api_implementation.Type() == 'cpp': 59 | if api_implementation.Version() == 2: 60 | from google.protobuf.internal.cpp import cpp_message 61 | _NewMessage = cpp_message.NewMessage 62 | _InitMessage = cpp_message.InitMessage 63 | else: 64 | from google.protobuf.internal import cpp_message 65 | _NewMessage = cpp_message.NewMessage 66 | _InitMessage = cpp_message.InitMessage 67 | else: 68 | from google.protobuf.internal import python_message 69 | _NewMessage = python_message.NewMessage 70 | _InitMessage = python_message.InitMessage 71 | 72 | 73 | class GeneratedProtocolMessageType(type): 74 | 75 | """Metaclass for protocol message classes created at runtime from Descriptors. 76 | 77 | We add implementations for all methods described in the Message class. We 78 | also create properties to allow getting/setting all fields in the protocol 79 | message. Finally, we create slots to prevent users from accidentally 80 | "setting" nonexistent fields in the protocol message, which then wouldn't get 81 | serialized / deserialized properly. 82 | 83 | The protocol compiler currently uses this metaclass to create protocol 84 | message classes at runtime. Clients can also manually create their own 85 | classes at runtime, as in this example: 86 | 87 | mydescriptor = Descriptor(.....) 88 | class MyProtoClass(Message): 89 | __metaclass__ = GeneratedProtocolMessageType 90 | DESCRIPTOR = mydescriptor 91 | myproto_instance = MyProtoClass() 92 | myproto.foo_field = 23 93 | ... 94 | """ 95 | 96 | # Must be consistent with the protocol-compiler code in 97 | # proto2/compiler/internal/generator.*. 98 | _DESCRIPTOR_KEY = 'DESCRIPTOR' 99 | 100 | def __new__(cls, name, bases, dictionary): 101 | """Custom allocation for runtime-generated class types. 102 | 103 | We override __new__ because this is apparently the only place 104 | where we can meaningfully set __slots__ on the class we're creating(?). 105 | (The interplay between metaclasses and slots is not very well-documented). 106 | 107 | Args: 108 | name: Name of the class (ignored, but required by the 109 | metaclass protocol). 110 | bases: Base classes of the class we're constructing. 111 | (Should be message.Message). We ignore this field, but 112 | it's required by the metaclass protocol 113 | dictionary: The class dictionary of the class we're 114 | constructing. dictionary[_DESCRIPTOR_KEY] must contain 115 | a Descriptor object describing this protocol message 116 | type. 117 | 118 | Returns: 119 | Newly-allocated class. 120 | """ 121 | descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY] 122 | bases = _NewMessage(bases, descriptor, dictionary) 123 | superclass = super(GeneratedProtocolMessageType, cls) 124 | 125 | new_class = superclass.__new__(cls, name, bases, dictionary) 126 | setattr(descriptor, '_concrete_class', new_class) 127 | return new_class 128 | 129 | def __init__(cls, name, bases, dictionary): 130 | """Here we perform the majority of our work on the class. 131 | We add enum getters, an __init__ method, implementations 132 | of all Message methods, and properties for all fields 133 | in the protocol type. 134 | 135 | Args: 136 | name: Name of the class (ignored, but required by the 137 | metaclass protocol). 
138 | bases: Base classes of the class we're constructing. 139 | (Should be message.Message). We ignore this field, but 140 | it's required by the metaclass protocol 141 | dictionary: The class dictionary of the class we're 142 | constructing. dictionary[_DESCRIPTOR_KEY] must contain 143 | a Descriptor object describing this protocol message 144 | type. 145 | """ 146 | descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY] 147 | _InitMessage(descriptor, cls) 148 | superclass = super(GeneratedProtocolMessageType, cls) 149 | superclass.__init__(name, bases, dictionary) 150 | 151 | 152 | def ParseMessage(descriptor, byte_str): 153 | """Generate a new Message instance from this Descriptor and a byte string. 154 | 155 | Args: 156 | descriptor: Protobuf Descriptor object 157 | byte_str: Serialized protocol buffer byte string 158 | 159 | Returns: 160 | Newly created protobuf Message object. 161 | """ 162 | 163 | class _ResultClass(message.Message): 164 | __metaclass__ = GeneratedProtocolMessageType 165 | DESCRIPTOR = descriptor 166 | 167 | new_msg = _ResultClass() 168 | new_msg.ParseFromString(byte_str) 169 | return new_msg 170 | -------------------------------------------------------------------------------- /google/protobuf/service.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """DEPRECATED: Declares the RPC service interfaces. 32 | 33 | This module declares the abstract interfaces underlying proto2 RPC 34 | services. These are intended to be independent of any particular RPC 35 | implementation, so that proto2 services can be used on top of a variety 36 | of implementations. 
Starting with version 2.3.0, RPC implementations should 37 | not try to build on these, but should instead provide code generator plugins 38 | which generate code specific to the particular RPC implementation. This way 39 | the generated code can be more appropriate for the implementation in use 40 | and can avoid unnecessary layers of indirection. 41 | """ 42 | 43 | __author__ = 'petar@google.com (Petar Petrov)' 44 | 45 | 46 | class RpcException(Exception): 47 | """Exception raised on failed blocking RPC method call.""" 48 | pass 49 | 50 | 51 | class Service(object): 52 | 53 | """Abstract base interface for protocol-buffer-based RPC services. 54 | 55 | Services themselves are abstract classes (implemented either by servers or as 56 | stubs), but they subclass this base interface. The methods of this 57 | interface can be used to call the methods of the service without knowing 58 | its exact type at compile time (analogous to the Message interface). 59 | """ 60 | 61 | def GetDescriptor(): 62 | """Retrieves this service's descriptor.""" 63 | raise NotImplementedError 64 | 65 | def CallMethod(self, method_descriptor, rpc_controller, 66 | request, done): 67 | """Calls a method of the service specified by method_descriptor. 68 | 69 | If "done" is None then the call is blocking and the response 70 | message will be returned directly. Otherwise the call is asynchronous 71 | and "done" will later be called with the response value. 72 | 73 | In the blocking case, RpcException will be raised on error. 74 | 75 | Preconditions: 76 | * method_descriptor.service == GetDescriptor 77 | * request is of the exact same classes as returned by 78 | GetRequestClass(method). 79 | * After the call has started, the request must not be modified. 80 | * "rpc_controller" is of the correct type for the RPC implementation being 81 | used by this Service. For stubs, the "correct type" depends on the 82 | RpcChannel which the stub is using. 83 | 84 | Postconditions: 85 | * "done" will be called when the method is complete. This may be 86 | before CallMethod() returns or it may be at some point in the future. 87 | * If the RPC failed, the response value passed to "done" will be None. 88 | Further details about the failure can be found by querying the 89 | RpcController. 90 | """ 91 | raise NotImplementedError 92 | 93 | def GetRequestClass(self, method_descriptor): 94 | """Returns the class of the request message for the specified method. 95 | 96 | CallMethod() requires that the request is of a particular subclass of 97 | Message. GetRequestClass() gets the default instance of this required 98 | type. 99 | 100 | Example: 101 | method = service.GetDescriptor().FindMethodByName("Foo") 102 | request = stub.GetRequestClass(method)() 103 | request.ParseFromString(input) 104 | service.CallMethod(method, request, callback) 105 | """ 106 | raise NotImplementedError 107 | 108 | def GetResponseClass(self, method_descriptor): 109 | """Returns the class of the response message for the specified method. 110 | 111 | This method isn't really needed, as the RpcChannel's CallMethod constructs 112 | the response protocol message. It's provided anyway in case it is useful 113 | for the caller to know the response type in advance. 114 | """ 115 | raise NotImplementedError 116 | 117 | 118 | class RpcController(object): 119 | 120 | """An RpcController mediates a single method call. 
121 | 122 | The primary purpose of the controller is to provide a way to manipulate 123 | settings specific to the RPC implementation and to find out about RPC-level 124 | errors. The methods provided by the RpcController interface are intended 125 | to be a "least common denominator" set of features which we expect all 126 | implementations to support. Specific implementations may provide more 127 | advanced features (e.g. deadline propagation). 128 | """ 129 | 130 | # Client-side methods below 131 | 132 | def Reset(self): 133 | """Resets the RpcController to its initial state. 134 | 135 | After the RpcController has been reset, it may be reused in 136 | a new call. Must not be called while an RPC is in progress. 137 | """ 138 | raise NotImplementedError 139 | 140 | def Failed(self): 141 | """Returns true if the call failed. 142 | 143 | After a call has finished, returns true if the call failed. The possible 144 | reasons for failure depend on the RPC implementation. Failed() must not 145 | be called before a call has finished. If Failed() returns true, the 146 | contents of the response message are undefined. 147 | """ 148 | raise NotImplementedError 149 | 150 | def ErrorText(self): 151 | """If Failed is true, returns a human-readable description of the error.""" 152 | raise NotImplementedError 153 | 154 | def StartCancel(self): 155 | """Initiate cancellation. 156 | 157 | Advises the RPC system that the caller desires that the RPC call be 158 | canceled. The RPC system may cancel it immediately, may wait awhile and 159 | then cancel it, or may not even cancel the call at all. If the call is 160 | canceled, the "done" callback will still be called and the RpcController 161 | will indicate that the call failed at that time. 162 | """ 163 | raise NotImplementedError 164 | 165 | # Server-side methods below 166 | 167 | def SetFailed(self, reason): 168 | """Sets a failure reason. 169 | 170 | Causes Failed() to return true on the client side. "reason" will be 171 | incorporated into the message returned by ErrorText(). If you find 172 | you need to return machine-readable information about failures, you 173 | should incorporate it into your response protocol buffer and should 174 | NOT call SetFailed(). 175 | """ 176 | raise NotImplementedError 177 | 178 | def IsCanceled(self): 179 | """Checks if the client cancelled the RPC. 180 | 181 | If true, indicates that the client canceled the RPC, so the server may 182 | as well give up on replying to it. The server should still call the 183 | final "done" callback. 184 | """ 185 | raise NotImplementedError 186 | 187 | def NotifyOnCancel(self, callback): 188 | """Sets a callback to invoke on cancel. 189 | 190 | Asks that the given callback be called when the RPC is canceled. The 191 | callback will always be called exactly once. If the RPC completes without 192 | being canceled, the callback will be called after completion. If the RPC 193 | has already been canceled when NotifyOnCancel() is called, the callback 194 | will be called immediately. 195 | 196 | NotifyOnCancel() must be called no more than once per request. 197 | """ 198 | raise NotImplementedError 199 | 200 | 201 | class RpcChannel(object): 202 | 203 | """Abstract interface for an RPC channel. 204 | 205 | An RpcChannel represents a communication line to a service which can be used 206 | to call that service's methods. The service may be running on another 207 | machine. Normally, you should not use an RpcChannel directly, but instead 208 | construct a stub {@link Service} wrapping it. 
209 | 210 | Example: 211 | RpcChannel channel = rpcImpl.Channel("remotehost.example.com:1234") 212 | RpcController controller = rpcImpl.Controller() 213 | MyService service = MyService_Stub(channel) 214 | service.MyMethod(controller, request, callback) 215 | """ 216 | 217 | def CallMethod(self, method_descriptor, rpc_controller, 218 | request, response_class, done): 219 | """Calls the method identified by the descriptor. 220 | 221 | Call the given method of the remote service. The signature of this 222 | procedure looks the same as Service.CallMethod(), but the requirements 223 | are less strict in one important way: the request object doesn't have to 224 | be of any specific class as long as its descriptor is method.input_type. 225 | """ 226 | raise NotImplementedError 227 | -------------------------------------------------------------------------------- /google/protobuf/service_reflection.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Contains metaclasses used to create protocol service and service stub 32 | classes from ServiceDescriptor objects at runtime. 33 | 34 | The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to 35 | inject all useful functionality into the classes output by the protocol 36 | compiler at compile-time. 37 | """ 38 | 39 | __author__ = 'petar@google.com (Petar Petrov)' 40 | 41 | 42 | class GeneratedServiceType(type): 43 | 44 | """Metaclass for service classes created at runtime from ServiceDescriptors. 45 | 46 | Implementations for all methods described in the Service class are added here 47 | by this class. We also create properties to allow getting/setting all fields 48 | in the protocol message.
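Note that service.py above only declares interfaces; every method raises NotImplementedError. A workable controller for local, blocking calls can be as small as the sketch below (compare with rpc/rpc_controller.py in this repo for the project's own take):

```python
from google.protobuf import service

class SimpleRpcController(service.RpcController):
    """Minimal controller: just enough state for Failed()/ErrorText()."""

    def __init__(self):
        self.Reset()

    def Reset(self):
        self._failed = False
        self._error = None

    def Failed(self):
        return self._failed

    def ErrorText(self):
        return self._error

    def SetFailed(self, reason):
        self._failed = True
        self._error = reason

controller = SimpleRpcController()
controller.SetFailed('connection refused')
assert controller.Failed() and controller.ErrorText() == 'connection refused'
```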
49 | 50 | The protocol compiler currently uses this metaclass to create protocol service 51 | classes at runtime. Clients can also manually create their own classes at 52 | runtime, as in this example: 53 | 54 | mydescriptor = ServiceDescriptor(.....) 55 | class MyProtoService(service.Service): 56 | __metaclass__ = GeneratedServiceType 57 | DESCRIPTOR = mydescriptor 58 | myservice_instance = MyProtoService() 59 | ... 60 | """ 61 | 62 | _DESCRIPTOR_KEY = 'DESCRIPTOR' 63 | 64 | def __init__(cls, name, bases, dictionary): 65 | """Creates a message service class. 66 | 67 | Args: 68 | name: Name of the class (ignored, but required by the metaclass 69 | protocol). 70 | bases: Base classes of the class being constructed. 71 | dictionary: The class dictionary of the class being constructed. 72 | dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object 73 | describing this protocol service type. 74 | """ 75 | # Don't do anything if this class doesn't have a descriptor. This happens 76 | # when a service class is subclassed. 77 | if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary: 78 | return 79 | descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY] 80 | service_builder = _ServiceBuilder(descriptor) 81 | service_builder.BuildService(cls) 82 | 83 | 84 | class GeneratedServiceStubType(GeneratedServiceType): 85 | 86 | """Metaclass for service stubs created at runtime from ServiceDescriptors. 87 | 88 | This class has similar responsibilities as GeneratedServiceType, except that 89 | it creates the service stub classes. 90 | """ 91 | 92 | _DESCRIPTOR_KEY = 'DESCRIPTOR' 93 | 94 | def __init__(cls, name, bases, dictionary): 95 | """Creates a message service stub class. 96 | 97 | Args: 98 | name: Name of the class (ignored, here). 99 | bases: Base classes of the class being constructed. 100 | dictionary: The class dictionary of the class being constructed. 101 | dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object 102 | describing this protocol service type. 103 | """ 104 | super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary) 105 | # Don't do anything if this class doesn't have a descriptor. This happens 106 | # when a service stub is subclassed. 107 | if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary: 108 | return 109 | descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY] 110 | service_stub_builder = _ServiceStubBuilder(descriptor) 111 | service_stub_builder.BuildServiceStub(cls) 112 | 113 | 114 | class _ServiceBuilder(object): 115 | 116 | """This class constructs a protocol service class using a service descriptor. 117 | 118 | Given a service descriptor, this class constructs a class that represents 119 | the specified service descriptor. One service builder instance constructs 120 | exactly one service class. That means all instances of that class share the 121 | same builder. 122 | """ 123 | 124 | def __init__(self, service_descriptor): 125 | """Initializes an instance of the service class builder. 126 | 127 | Args: 128 | service_descriptor: ServiceDescriptor to use when constructing the 129 | service class. 130 | """ 131 | self.descriptor = service_descriptor 132 | 133 | def BuildService(self, cls): 134 | """Constructs the service class. 135 | 136 | Args: 137 | cls: The class that will be constructed. 138 | """ 139 | 140 | # CallMethod needs to operate with an instance of the Service class. 
This 141 | # internal wrapper function exists only to be able to pass the service 142 | # instance to the method that does the real CallMethod work. 143 | def _WrapCallMethod(srvc, method_descriptor, 144 | rpc_controller, request, callback): 145 | return self._CallMethod(srvc, method_descriptor, 146 | rpc_controller, request, callback) 147 | self.cls = cls 148 | cls.CallMethod = _WrapCallMethod 149 | cls.GetDescriptor = staticmethod(lambda: self.descriptor) 150 | cls.GetDescriptor.__doc__ = "Returns the service descriptor." 151 | cls.GetRequestClass = self._GetRequestClass 152 | cls.GetResponseClass = self._GetResponseClass 153 | for method in self.descriptor.methods: 154 | setattr(cls, method.name, self._GenerateNonImplementedMethod(method)) 155 | 156 | def _CallMethod(self, srvc, method_descriptor, 157 | rpc_controller, request, callback): 158 | """Calls the method described by a given method descriptor. 159 | 160 | Args: 161 | srvc: Instance of the service for which this method is called. 162 | method_descriptor: Descriptor that represents the method to call. 163 | rpc_controller: RPC controller to use for this method's execution. 164 | request: Request protocol message. 165 | callback: A callback to invoke after the method has completed. 166 | """ 167 | if method_descriptor.containing_service != self.descriptor: 168 | raise RuntimeError( 169 | 'CallMethod() given method descriptor for wrong service type.') 170 | method = getattr(srvc, method_descriptor.name) 171 | return method(rpc_controller, request, callback) 172 | 173 | def _GetRequestClass(self, method_descriptor): 174 | """Returns the class of the request protocol message. 175 | 176 | Args: 177 | method_descriptor: Descriptor of the method for which to return the 178 | request protocol message class. 179 | 180 | Returns: 181 | A class that represents the input protocol message of the specified 182 | method. 183 | """ 184 | if method_descriptor.containing_service != self.descriptor: 185 | raise RuntimeError( 186 | 'GetRequestClass() given method descriptor for wrong service type.') 187 | return method_descriptor.input_type._concrete_class 188 | 189 | def _GetResponseClass(self, method_descriptor): 190 | """Returns the class of the response protocol message. 191 | 192 | Args: 193 | method_descriptor: Descriptor of the method for which to return the 194 | response protocol message class. 195 | 196 | Returns: 197 | A class that represents the output protocol message of the specified 198 | method. 199 | """ 200 | if method_descriptor.containing_service != self.descriptor: 201 | raise RuntimeError( 202 | 'GetResponseClass() given method descriptor for wrong service type.') 203 | return method_descriptor.output_type._concrete_class 204 | 205 | def _GenerateNonImplementedMethod(self, method): 206 | """Generates and returns a method that can be set for a service method. 207 | 208 | Args: 209 | method: Descriptor of the service method for which a method is to be 210 | generated. 211 | 212 | Returns: 213 | A method that can be added to the service class. 214 | """ 215 | return lambda inst, rpc_controller, request, callback: ( 216 | self._NonImplementedMethod(method.name, rpc_controller, callback)) 217 | 218 | def _NonImplementedMethod(self, method_name, rpc_controller, callback): 219 | """The body of all methods in the generated service class. 220 | 221 | Args: 222 | method_name: Name of the method being executed. 223 | rpc_controller: RPC controller used to execute this method. 
224 | callback: A callback which will be invoked when the method finishes. 225 | """ 226 | rpc_controller.SetFailed('Method %s not implemented.' % method_name) 227 | callback(None) 228 | 229 | 230 | class _ServiceStubBuilder(object): 231 | 232 | """Constructs a protocol service stub class using a service descriptor. 233 | 234 | Given a service descriptor, this class constructs a suitable stub class. 235 | A stub is just a type-safe wrapper around an RpcChannel which emulates a 236 | local implementation of the service. 237 | 238 | One service stub builder instance constructs exactly one class. It means all 239 | instances of that class share the same service stub builder. 240 | """ 241 | 242 | def __init__(self, service_descriptor): 243 | """Initializes an instance of the service stub class builder. 244 | 245 | Args: 246 | service_descriptor: ServiceDescriptor to use when constructing the 247 | stub class. 248 | """ 249 | self.descriptor = service_descriptor 250 | 251 | def BuildServiceStub(self, cls): 252 | """Constructs the stub class. 253 | 254 | Args: 255 | cls: The class that will be constructed. 256 | """ 257 | 258 | def _ServiceStubInit(stub, rpc_channel): 259 | stub.rpc_channel = rpc_channel 260 | self.cls = cls 261 | cls.__init__ = _ServiceStubInit 262 | for method in self.descriptor.methods: 263 | setattr(cls, method.name, self._GenerateStubMethod(method)) 264 | 265 | def _GenerateStubMethod(self, method): 266 | return (lambda inst, rpc_controller, request, callback=None: 267 | self._StubMethod(inst, method, rpc_controller, request, callback)) 268 | 269 | def _StubMethod(self, stub, method_descriptor, 270 | rpc_controller, request, callback): 271 | """The body of all service methods in the generated stub class. 272 | 273 | Args: 274 | stub: Stub instance. 275 | method_descriptor: Descriptor of the invoked method. 276 | rpc_controller: Rpc controller to execute the method. 277 | request: Request protocol message. 278 | callback: A callback to execute when the method finishes. 279 | Returns: 280 | Response message (in case of blocking call). 281 | """ 282 | return stub.rpc_channel.CallMethod( 283 | method_descriptor, rpc_controller, request, 284 | method_descriptor.output_type._concrete_class, callback) 285 | -------------------------------------------------------------------------------- /google/protobuf/text_format.py: -------------------------------------------------------------------------------- 1 | # Protocol Buffers - Google's data interchange format 2 | # Copyright 2008 Google Inc. All rights reserved. 3 | # http://code.google.com/p/protobuf/ 4 | # 5 | # Redistribution and use in source and binary forms, with or without 6 | # modification, are permitted provided that the following conditions are 7 | # met: 8 | # 9 | # * Redistributions of source code must retain the above copyright 10 | # notice, this list of conditions and the following disclaimer. 11 | # * Redistributions in binary form must reproduce the above 12 | # copyright notice, this list of conditions and the following disclaimer 13 | # in the documentation and/or other materials provided with the 14 | # distribution. 15 | # * Neither the name of Google Inc. nor the names of its 16 | # contributors may be used to endorse or promote products derived from 17 | # this software without specific prior written permission. 
18 | # 19 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | 31 | """Contains routines for printing protocol messages in text format.""" 32 | 33 | __author__ = 'kenton@google.com (Kenton Varda)' 34 | 35 | import cStringIO 36 | import re 37 | 38 | from collections import deque 39 | from google.protobuf.internal import type_checkers 40 | from google.protobuf import descriptor 41 | 42 | __all__ = [ 'MessageToString', 'PrintMessage', 'PrintField', 43 | 'PrintFieldValue', 'Merge' ] 44 | 45 | 46 | _INTEGER_CHECKERS = (type_checkers.Uint32ValueChecker(), 47 | type_checkers.Int32ValueChecker(), 48 | type_checkers.Uint64ValueChecker(), 49 | type_checkers.Int64ValueChecker()) 50 | _FLOAT_INFINITY = re.compile('-?inf(?:inity)?f?', re.IGNORECASE) 51 | _FLOAT_NAN = re.compile('nanf?', re.IGNORECASE) 52 | 53 | 54 | class ParseError(Exception): 55 | """Thrown in case of ASCII parsing error.""" 56 | 57 | 58 | def MessageToString(message, as_utf8=False, as_one_line=False): 59 | out = cStringIO.StringIO() 60 | PrintMessage(message, out, as_utf8=as_utf8, as_one_line=as_one_line) 61 | result = out.getvalue() 62 | out.close() 63 | if as_one_line: 64 | return result.rstrip() 65 | return result 66 | 67 | 68 | def PrintMessage(message, out, indent=0, as_utf8=False, as_one_line=False): 69 | for field, value in message.ListFields(): 70 | if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: 71 | for element in value: 72 | PrintField(field, element, out, indent, as_utf8, as_one_line) 73 | else: 74 | PrintField(field, value, out, indent, as_utf8, as_one_line) 75 | 76 | 77 | def PrintField(field, value, out, indent=0, as_utf8=False, as_one_line=False): 78 | """Print a single field name/value pair. For repeated fields, the value 79 | should be a single element.""" 80 | 81 | out.write(' ' * indent); 82 | if field.is_extension: 83 | out.write('[') 84 | if (field.containing_type.GetOptions().message_set_wire_format and 85 | field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and 86 | field.message_type == field.extension_scope and 87 | field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL): 88 | out.write(field.message_type.full_name) 89 | else: 90 | out.write(field.full_name) 91 | out.write(']') 92 | elif field.type == descriptor.FieldDescriptor.TYPE_GROUP: 93 | # For groups, use the capitalized name. 94 | out.write(field.message_type.name) 95 | else: 96 | out.write(field.name) 97 | 98 | if field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE: 99 | # The colon is optional in this case, but our cross-language golden files 100 | # don't include it. 
101 | out.write(': ') 102 | 103 | PrintFieldValue(field, value, out, indent, as_utf8, as_one_line) 104 | if as_one_line: 105 | out.write(' ') 106 | else: 107 | out.write('\n') 108 | 109 | 110 | def PrintFieldValue(field, value, out, indent=0, 111 | as_utf8=False, as_one_line=False): 112 | """Print a single field value (not including name). For repeated fields, 113 | the value should be a single element.""" 114 | 115 | if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: 116 | if as_one_line: 117 | out.write(' { ') 118 | PrintMessage(value, out, indent, as_utf8, as_one_line) 119 | out.write('}') 120 | else: 121 | out.write(' {\n') 122 | PrintMessage(value, out, indent + 2, as_utf8, as_one_line) 123 | out.write(' ' * indent + '}') 124 | elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM: 125 | enum_value = field.enum_type.values_by_number.get(value, None) 126 | if enum_value is not None: 127 | out.write(enum_value.name) 128 | else: 129 | out.write(str(value)) 130 | elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING: 131 | out.write('\"') 132 | if type(value) is unicode: 133 | out.write(_CEscape(value.encode('utf-8'), as_utf8)) 134 | else: 135 | out.write(_CEscape(value, as_utf8)) 136 | out.write('\"') 137 | elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL: 138 | if value: 139 | out.write("true") 140 | else: 141 | out.write("false") 142 | else: 143 | out.write(str(value)) 144 | 145 | 146 | def Merge(text, message): 147 | """Merges an ASCII representation of a protocol message into a message. 148 | 149 | Args: 150 | text: Message ASCII representation. 151 | message: A protocol buffer message to merge into. 152 | 153 | Raises: 154 | ParseError: On ASCII parsing problems. 155 | """ 156 | tokenizer = _Tokenizer(text) 157 | while not tokenizer.AtEnd(): 158 | _MergeField(tokenizer, message) 159 | 160 | 161 | def _MergeField(tokenizer, message): 162 | """Merges a single protocol message field into a message. 163 | 164 | Args: 165 | tokenizer: A tokenizer to parse the field name and values. 166 | message: A protocol message to record the data. 167 | 168 | Raises: 169 | ParseError: In case of ASCII parsing problems. 170 | """ 171 | message_descriptor = message.DESCRIPTOR 172 | if tokenizer.TryConsume('['): 173 | name = [tokenizer.ConsumeIdentifier()] 174 | while tokenizer.TryConsume('.'): 175 | name.append(tokenizer.ConsumeIdentifier()) 176 | name = '.'.join(name) 177 | 178 | if not message_descriptor.is_extendable: 179 | raise tokenizer.ParseErrorPreviousToken( 180 | 'Message type "%s" does not have extensions.' % 181 | message_descriptor.full_name) 182 | field = message.Extensions._FindExtensionByName(name) 183 | if not field: 184 | raise tokenizer.ParseErrorPreviousToken( 185 | 'Extension "%s" not registered.' % name) 186 | elif message_descriptor != field.containing_type: 187 | raise tokenizer.ParseErrorPreviousToken( 188 | 'Extension "%s" does not extend message type "%s".' % ( 189 | name, message_descriptor.full_name)) 190 | tokenizer.Consume(']') 191 | else: 192 | name = tokenizer.ConsumeIdentifier() 193 | field = message_descriptor.fields_by_name.get(name, None) 194 | 195 | # Group names are expected to be capitalized as they appear in the 196 | # .proto file, which actually matches their type names, not their field 197 | # names. 
198 | if not field: 199 | field = message_descriptor.fields_by_name.get(name.lower(), None) 200 | if field and field.type != descriptor.FieldDescriptor.TYPE_GROUP: 201 | field = None 202 | 203 | if (field and field.type == descriptor.FieldDescriptor.TYPE_GROUP and 204 | field.message_type.name != name): 205 | field = None 206 | 207 | if not field: 208 | raise tokenizer.ParseErrorPreviousToken( 209 | 'Message type "%s" has no field named "%s".' % ( 210 | message_descriptor.full_name, name)) 211 | 212 | if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: 213 | tokenizer.TryConsume(':') 214 | 215 | if tokenizer.TryConsume('<'): 216 | end_token = '>' 217 | else: 218 | tokenizer.Consume('{') 219 | end_token = '}' 220 | 221 | if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: 222 | if field.is_extension: 223 | sub_message = message.Extensions[field].add() 224 | else: 225 | sub_message = getattr(message, field.name).add() 226 | else: 227 | if field.is_extension: 228 | sub_message = message.Extensions[field] 229 | else: 230 | sub_message = getattr(message, field.name) 231 | sub_message.SetInParent() 232 | 233 | while not tokenizer.TryConsume(end_token): 234 | if tokenizer.AtEnd(): 235 | raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token)) 236 | _MergeField(tokenizer, sub_message) 237 | else: 238 | _MergeScalarField(tokenizer, message, field) 239 | 240 | 241 | def _MergeScalarField(tokenizer, message, field): 242 | """Merges a single protocol message scalar field into a message. 243 | 244 | Args: 245 | tokenizer: A tokenizer to parse the field value. 246 | message: A protocol message to record the data. 247 | field: The descriptor of the field to be merged. 248 | 249 | Raises: 250 | ParseError: In case of ASCII parsing problems. 251 | RuntimeError: On runtime errors. 
252 | """ 253 | tokenizer.Consume(':') 254 | value = None 255 | 256 | if field.type in (descriptor.FieldDescriptor.TYPE_INT32, 257 | descriptor.FieldDescriptor.TYPE_SINT32, 258 | descriptor.FieldDescriptor.TYPE_SFIXED32): 259 | value = tokenizer.ConsumeInt32() 260 | elif field.type in (descriptor.FieldDescriptor.TYPE_INT64, 261 | descriptor.FieldDescriptor.TYPE_SINT64, 262 | descriptor.FieldDescriptor.TYPE_SFIXED64): 263 | value = tokenizer.ConsumeInt64() 264 | elif field.type in (descriptor.FieldDescriptor.TYPE_UINT32, 265 | descriptor.FieldDescriptor.TYPE_FIXED32): 266 | value = tokenizer.ConsumeUint32() 267 | elif field.type in (descriptor.FieldDescriptor.TYPE_UINT64, 268 | descriptor.FieldDescriptor.TYPE_FIXED64): 269 | value = tokenizer.ConsumeUint64() 270 | elif field.type in (descriptor.FieldDescriptor.TYPE_FLOAT, 271 | descriptor.FieldDescriptor.TYPE_DOUBLE): 272 | value = tokenizer.ConsumeFloat() 273 | elif field.type == descriptor.FieldDescriptor.TYPE_BOOL: 274 | value = tokenizer.ConsumeBool() 275 | elif field.type == descriptor.FieldDescriptor.TYPE_STRING: 276 | value = tokenizer.ConsumeString() 277 | elif field.type == descriptor.FieldDescriptor.TYPE_BYTES: 278 | value = tokenizer.ConsumeByteString() 279 | elif field.type == descriptor.FieldDescriptor.TYPE_ENUM: 280 | value = tokenizer.ConsumeEnum(field) 281 | else: 282 | raise RuntimeError('Unknown field type %d' % field.type) 283 | 284 | if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: 285 | if field.is_extension: 286 | message.Extensions[field].append(value) 287 | else: 288 | getattr(message, field.name).append(value) 289 | else: 290 | if field.is_extension: 291 | message.Extensions[field] = value 292 | else: 293 | setattr(message, field.name, value) 294 | 295 | 296 | class _Tokenizer(object): 297 | """Protocol buffer ASCII representation tokenizer. 298 | 299 | This class handles the lower level string parsing by splitting it into 300 | meaningful tokens. 301 | 302 | It was directly ported from the Java protocol buffer API. 303 | """ 304 | 305 | _WHITESPACE = re.compile('(\\s|(#.*$))+', re.MULTILINE) 306 | _TOKEN = re.compile( 307 | '[a-zA-Z_][0-9a-zA-Z_+-]*|' # an identifier 308 | '[0-9+-][0-9a-zA-Z_.+-]*|' # a number 309 | '\"([^\"\n\\\\]|\\\\.)*(\"|\\\\?$)|' # a double-quoted string 310 | '\'([^\'\n\\\\]|\\\\.)*(\'|\\\\?$)') # a single-quoted string 311 | _IDENTIFIER = re.compile('\w+') 312 | 313 | def __init__(self, text_message): 314 | self._text_message = text_message 315 | 316 | self._position = 0 317 | self._line = -1 318 | self._column = 0 319 | self._token_start = None 320 | self.token = '' 321 | self._lines = deque(text_message.split('\n')) 322 | self._current_line = '' 323 | self._previous_line = 0 324 | self._previous_column = 0 325 | self._SkipWhitespace() 326 | self.NextToken() 327 | 328 | def AtEnd(self): 329 | """Checks whether the end of the text was reached. 330 | 331 | Returns: 332 | True iff the end was reached. 
333 | """ 334 | return self.token == '' 335 | 336 | def _PopLine(self): 337 | while len(self._current_line) <= self._column: 338 | if not self._lines: 339 | self._current_line = '' 340 | return 341 | self._line += 1 342 | self._column = 0 343 | self._current_line = self._lines.popleft() 344 | 345 | def _SkipWhitespace(self): 346 | while True: 347 | self._PopLine() 348 | match = self._WHITESPACE.match(self._current_line, self._column) 349 | if not match: 350 | break 351 | length = len(match.group(0)) 352 | self._column += length 353 | 354 | def TryConsume(self, token): 355 | """Tries to consume a given piece of text. 356 | 357 | Args: 358 | token: Text to consume. 359 | 360 | Returns: 361 | True iff the text was consumed. 362 | """ 363 | if self.token == token: 364 | self.NextToken() 365 | return True 366 | return False 367 | 368 | def Consume(self, token): 369 | """Consumes a piece of text. 370 | 371 | Args: 372 | token: Text to consume. 373 | 374 | Raises: 375 | ParseError: If the text couldn't be consumed. 376 | """ 377 | if not self.TryConsume(token): 378 | raise self._ParseError('Expected "%s".' % token) 379 | 380 | def ConsumeIdentifier(self): 381 | """Consumes protocol message field identifier. 382 | 383 | Returns: 384 | Identifier string. 385 | 386 | Raises: 387 | ParseError: If an identifier couldn't be consumed. 388 | """ 389 | result = self.token 390 | if not self._IDENTIFIER.match(result): 391 | raise self._ParseError('Expected identifier.') 392 | self.NextToken() 393 | return result 394 | 395 | def ConsumeInt32(self): 396 | """Consumes a signed 32bit integer number. 397 | 398 | Returns: 399 | The integer parsed. 400 | 401 | Raises: 402 | ParseError: If a signed 32bit integer couldn't be consumed. 403 | """ 404 | try: 405 | result = ParseInteger(self.token, is_signed=True, is_long=False) 406 | except ValueError, e: 407 | raise self._ParseError(str(e)) 408 | self.NextToken() 409 | return result 410 | 411 | def ConsumeUint32(self): 412 | """Consumes an unsigned 32bit integer number. 413 | 414 | Returns: 415 | The integer parsed. 416 | 417 | Raises: 418 | ParseError: If an unsigned 32bit integer couldn't be consumed. 419 | """ 420 | try: 421 | result = ParseInteger(self.token, is_signed=False, is_long=False) 422 | except ValueError, e: 423 | raise self._ParseError(str(e)) 424 | self.NextToken() 425 | return result 426 | 427 | def ConsumeInt64(self): 428 | """Consumes a signed 64bit integer number. 429 | 430 | Returns: 431 | The integer parsed. 432 | 433 | Raises: 434 | ParseError: If a signed 64bit integer couldn't be consumed. 435 | """ 436 | try: 437 | result = ParseInteger(self.token, is_signed=True, is_long=True) 438 | except ValueError, e: 439 | raise self._ParseError(str(e)) 440 | self.NextToken() 441 | return result 442 | 443 | def ConsumeUint64(self): 444 | """Consumes an unsigned 64bit integer number. 445 | 446 | Returns: 447 | The integer parsed. 448 | 449 | Raises: 450 | ParseError: If an unsigned 64bit integer couldn't be consumed. 451 | """ 452 | try: 453 | result = ParseInteger(self.token, is_signed=False, is_long=True) 454 | except ValueError, e: 455 | raise self._ParseError(str(e)) 456 | self.NextToken() 457 | return result 458 | 459 | def ConsumeFloat(self): 460 | """Consumes a floating point number. 461 | 462 | Returns: 463 | The number parsed. 464 | 465 | Raises: 466 | ParseError: If a floating point number couldn't be consumed. 
467 | """ 468 | try: 469 | result = ParseFloat(self.token) 470 | except ValueError, e: 471 | raise self._ParseError(str(e)) 472 | self.NextToken() 473 | return result 474 | 475 | def ConsumeBool(self): 476 | """Consumes a boolean value. 477 | 478 | Returns: 479 | The bool parsed. 480 | 481 | Raises: 482 | ParseError: If a boolean value couldn't be consumed. 483 | """ 484 | try: 485 | result = ParseBool(self.token) 486 | except ValueError, e: 487 | raise self._ParseError(str(e)) 488 | self.NextToken() 489 | return result 490 | 491 | def ConsumeString(self): 492 | """Consumes a string value. 493 | 494 | Returns: 495 | The string parsed. 496 | 497 | Raises: 498 | ParseError: If a string value couldn't be consumed. 499 | """ 500 | bytes = self.ConsumeByteString() 501 | try: 502 | return unicode(bytes, 'utf-8') 503 | except UnicodeDecodeError, e: 504 | raise self._StringParseError(e) 505 | 506 | def ConsumeByteString(self): 507 | """Consumes a byte array value. 508 | 509 | Returns: 510 | The array parsed (as a string). 511 | 512 | Raises: 513 | ParseError: If a byte array value couldn't be consumed. 514 | """ 515 | list = [self._ConsumeSingleByteString()] 516 | while len(self.token) > 0 and self.token[0] in ('\'', '"'): 517 | list.append(self._ConsumeSingleByteString()) 518 | return "".join(list) 519 | 520 | def _ConsumeSingleByteString(self): 521 | """Consume one token of a string literal. 522 | 523 | String literals (whether bytes or text) can come in multiple adjacent 524 | tokens which are automatically concatenated, like in C or Python. This 525 | method only consumes one token. 526 | """ 527 | text = self.token 528 | if len(text) < 1 or text[0] not in ('\'', '"'): 529 | raise self._ParseError('Expected string.') 530 | 531 | if len(text) < 2 or text[-1] != text[0]: 532 | raise self._ParseError('String missing ending quote.') 533 | 534 | try: 535 | result = _CUnescape(text[1:-1]) 536 | except ValueError, e: 537 | raise self._ParseError(str(e)) 538 | self.NextToken() 539 | return result 540 | 541 | def ConsumeEnum(self, field): 542 | try: 543 | result = ParseEnum(field, self.token) 544 | except ValueError, e: 545 | raise self._ParseError(str(e)) 546 | self.NextToken() 547 | return result 548 | 549 | def ParseErrorPreviousToken(self, message): 550 | """Creates and *returns* a ParseError for the previously read token. 551 | 552 | Args: 553 | message: A message to set for the exception. 554 | 555 | Returns: 556 | A ParseError instance. 
557 | """ 558 | return ParseError('%d:%d : %s' % ( 559 | self._previous_line + 1, self._previous_column + 1, message)) 560 | 561 | def _ParseError(self, message): 562 | """Creates and *returns* a ParseError for the current token.""" 563 | return ParseError('%d:%d : %s' % ( 564 | self._line + 1, self._column + 1, message)) 565 | 566 | def _StringParseError(self, e): 567 | return self._ParseError('Couldn\'t parse string: ' + str(e)) 568 | 569 | def NextToken(self): 570 | """Reads the next meaningful token.""" 571 | self._previous_line = self._line 572 | self._previous_column = self._column 573 | 574 | self._column += len(self.token) 575 | self._SkipWhitespace() 576 | 577 | if not self._lines and len(self._current_line) <= self._column: 578 | self.token = '' 579 | return 580 | 581 | match = self._TOKEN.match(self._current_line, self._column) 582 | if match: 583 | token = match.group(0) 584 | self.token = token 585 | else: 586 | self.token = self._current_line[self._column] 587 | 588 | 589 | # text.encode('string_escape') does not seem to satisfy our needs as it 590 | # encodes unprintable characters using two-digit hex escapes whereas our 591 | # C++ unescaping function allows hex escapes to be any length. So, 592 | # "\0011".encode('string_escape') ends up being "\\x011", which will be 593 | # decoded in C++ as a single-character string with char code 0x11. 594 | def _CEscape(text, as_utf8): 595 | def escape(c): 596 | o = ord(c) 597 | if o == 10: return r"\n" # optional escape 598 | if o == 13: return r"\r" # optional escape 599 | if o == 9: return r"\t" # optional escape 600 | if o == 39: return r"\'" # optional escape 601 | 602 | if o == 34: return r'\"' # necessary escape 603 | if o == 92: return r"\\" # necessary escape 604 | 605 | # necessary escapes 606 | if not as_utf8 and (o >= 127 or o < 32): return "\\%03o" % o 607 | return c 608 | return "".join([escape(c) for c in text]) 609 | 610 | 611 | _CUNESCAPE_HEX = re.compile(r'(\\+)x([0-9a-fA-F])(?![0-9a-fA-F])') 612 | 613 | 614 | def _CUnescape(text): 615 | def ReplaceHex(m): 616 | # Only replace the match if the number of leading back slashes is odd. i.e. 617 | # the slash itself is not escaped. 618 | if len(m.group(1)) & 1: 619 | return m.group(1) + 'x0' + m.group(2) 620 | return m.group(0) 621 | 622 | # This is required because the 'string_escape' encoding doesn't 623 | # allow single-digit hex escapes (like '\xf'). 624 | result = _CUNESCAPE_HEX.sub(ReplaceHex, text) 625 | return result.decode('string_escape') 626 | 627 | 628 | def ParseInteger(text, is_signed=False, is_long=False): 629 | """Parses an integer. 630 | 631 | Args: 632 | text: The text to parse. 633 | is_signed: True if a signed integer must be parsed. 634 | is_long: True if a long integer must be parsed. 635 | 636 | Returns: 637 | The integer value. 638 | 639 | Raises: 640 | ValueError: Thrown iff the text is not a valid integer. 641 | """ 642 | # Do the actual parsing. Exception handling is propagated to caller. 643 | try: 644 | result = int(text, 0) 645 | except ValueError: 646 | raise ValueError('Couldn\'t parse integer: %s' % text) 647 | 648 | # Check if the integer is sane. Exceptions handled by callers. 649 | checker = _INTEGER_CHECKERS[2 * int(is_long) + int(is_signed)] 650 | checker.CheckValue(result) 651 | return result 652 | 653 | 654 | def ParseFloat(text): 655 | """Parse a floating point number. 656 | 657 | Args: 658 | text: Text to parse. 659 | 660 | Returns: 661 | The number parsed. 
662 | 663 | Raises: 664 | ValueError: If a floating point number couldn't be parsed. 665 | """ 666 | try: 667 | # Assume Python compatible syntax. 668 | return float(text) 669 | except ValueError: 670 | # Check alternative spellings. 671 | if _FLOAT_INFINITY.match(text): 672 | if text[0] == '-': 673 | return float('-inf') 674 | else: 675 | return float('inf') 676 | elif _FLOAT_NAN.match(text): 677 | return float('nan') 678 | else: 679 | # assume '1.0f' format 680 | try: 681 | return float(text.rstrip('f')) 682 | except ValueError: 683 | raise ValueError('Couldn\'t parse float: %s' % text) 684 | 685 | 686 | def ParseBool(text): 687 | """Parse a boolean value. 688 | 689 | Args: 690 | text: Text to parse. 691 | 692 | Returns: 693 | The boolean value parsed. 694 | 695 | Raises: 696 | ValueError: If text is not a valid boolean. 697 | """ 698 | if text in ('true', 't', '1'): 699 | return True 700 | elif text in ('false', 'f', '0'): 701 | return False 702 | else: 703 | raise ValueError('Expected "true" or "false".') 704 | 705 | 706 | def ParseEnum(field, value): 707 | """Parse an enum value. 708 | 709 | The value can be specified by a number (the enum value), or by 710 | a string literal (the enum name). 711 | 712 | Args: 713 | field: Enum field descriptor. 714 | value: String value. 715 | 716 | Returns: 717 | Enum value number. 718 | 719 | Raises: 720 | ValueError: If the enum value could not be parsed. 721 | """ 722 | enum_descriptor = field.enum_type 723 | try: 724 | number = int(value, 0) 725 | except ValueError: 726 | # Identifier. 727 | enum_value = enum_descriptor.values_by_name.get(value, None) 728 | if enum_value is None: 729 | raise ValueError( 730 | 'Enum type "%s" has no value named %s.' % ( 731 | enum_descriptor.full_name, value)) 732 | else: 733 | # Numeric value. 734 | enum_value = enum_descriptor.values_by_number.get(number, None) 735 | if enum_value is None: 736 | raise ValueError( 737 | 'Enum type "%s" has no value with number %d.' 
% ( 738 | enum_descriptor.full_name, number)) 739 | return enum_value.number 740 | -------------------------------------------------------------------------------- /logger.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | 3 | import logging 4 | 5 | __logger_table = {} 6 | 7 | 8 | def get_logger(name): 9 | if name in __logger_table: 10 | return __logger_table[name] 11 | 12 | logger = logging.getLogger(name) 13 | logger.setLevel(logging.DEBUG) 14 | console_handler = logging.StreamHandler() 15 | logger.addHandler(console_handler) 16 | __logger_table[name] = logger 17 | return logger 18 | -------------------------------------------------------------------------------- /rpc/__init__.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | -------------------------------------------------------------------------------- /rpc/rpc_channel.py: -------------------------------------------------------------------------------- 1 | import struct 2 | 3 | from google.protobuf import service 4 | from rpc.rpc_controller import RpcController 5 | import logger 6 | 7 | 8 | class RpcParser(object): 9 | 10 | ST_HEAD = 0 11 | ST_DATA = 1 12 | 13 | def __init__(self, rpc_service, headfmt, indexfmt): 14 | self.logger = logger.get_logger('RpcParser') 15 | self.service = rpc_service 16 | self.headfmt = headfmt 17 | self.indexfmt = indexfmt 18 | self.headsize = struct.calcsize(self.headfmt) 19 | self.indexsize = struct.calcsize(self.indexfmt) 20 | 21 | self.buff = '' 22 | self.stat = RpcParser.ST_HEAD 23 | self.datasize = 0 24 | 25 | def feed(self, data): 26 | rpc_calls = [] 27 | self.buff += data 28 | while True: 29 | if self.stat == RpcParser.ST_HEAD: 30 | self.logger.debug('ST_HEAD: %d/%d' % (len(self.buff), self.headsize)) 31 | if len(self.buff) < self.headsize: 32 | break 33 | 34 | head_data = self.buff[:self.headsize] 35 | self.datasize = struct.unpack(self.headfmt, head_data)[0] 36 | 37 | self.buff = self.buff[self.headsize:] 38 | self.stat = RpcParser.ST_DATA 39 | 40 | if self.stat == RpcParser.ST_DATA: 41 | self.logger.debug('ST_DATA: %d/%d ' % (len(self.buff), self.datasize)) 42 | if len(self.buff) < self.datasize: 43 | break 44 | 45 | index_data = self.buff[:self.indexsize] 46 | request_data = self.buff[self.indexsize: self.datasize] 47 | 48 | index = struct.unpack(self.indexfmt, index_data)[0] 49 | service_descriptor = self.service.GetDescriptor() 50 | 51 | # throw IndexError if index is invalid 52 | method_descriptor = service_descriptor.methods[index] 53 | request = self.service.GetRequestClass(method_descriptor)() 54 | 55 | # throw AttributeError if failed to decode or message is not initialized 56 | request.ParseFromString(request_data) 57 | if not request.IsInitialized(): 58 | raise AttributeError('invalid request data') 59 | 60 | self.buff = self.buff[self.datasize:] 61 | self.stat = RpcParser.ST_HEAD 62 | 63 | rpc_calls.append((method_descriptor, request)) 64 | return rpc_calls 65 | 66 | 67 | class RpcChannel(service.RpcChannel): 68 | 69 | HEAD_FMT = '!I' 70 | INDEX_FMT = '!H' 71 | HEAD_LEN = struct.calcsize(HEAD_FMT) 72 | INDEX_LEN = struct.calcsize(INDEX_FMT) 73 | 74 | def __init__(self, service_local, conn): 75 | super(RpcChannel, self).__init__() 76 | self.logger = logger.get_logger('RpcChannel') 77 | self.service_local = service_local 78 | self.conn = conn 79 | 80 | self.conn.attach_rpc_channel(self) 81 | self.rpc_controller = RpcController(self) 82 | 83 | 
self.rpc_parser = RpcParser(self.service_local, RpcChannel.HEAD_FMT, RpcChannel.INDEX_FMT) 84 | 85 | def getpeername(self): 86 | if self.conn: 87 | return self.conn.getpeername() 88 | return None, None 89 | 90 | def on_disconnected(self): 91 | self.conn = None 92 | 93 | def disconnect(self): 94 | if self.conn: 95 | self.conn.disconnect() 96 | 97 | def CallMethod(self, 98 | method_descriptor, 99 | rpc_controller, 100 | request, 101 | response_class, 102 | done): 103 | """ called by stub, server_remote interface is maintained by stub """ 104 | index = method_descriptor.index 105 | data = request.SerializeToString() 106 | size = RpcChannel.INDEX_LEN + len(data) 107 | 108 | self.conn.send_data(struct.pack(RpcChannel.HEAD_FMT, size)) 109 | self.conn.send_data(struct.pack(RpcChannel.INDEX_FMT, index)) 110 | self.conn.send_data(data) 111 | # should wait here to receive response if using a synchronous RPC with return value 112 | 113 | def receive(self, data): 114 | """ receive request from remote and call server_local interface """ 115 | try: 116 | rpc_calls = self.rpc_parser.feed(data) 117 | except (AttributeError, IndexError), e: 118 | self.logger.warning('error occurred while parsing request, giving up and disconnecting.') 119 | self.disconnect() 120 | return 121 | 122 | for method_descriptor, request in rpc_calls: 123 | # should call the callback and send response to client if using a synchronous RPC with return value 124 | self.service_local.CallMethod(method_descriptor, self.rpc_controller, request, callback=None) 125 | -------------------------------------------------------------------------------- /rpc/rpc_controller.py: -------------------------------------------------------------------------------- 1 | from google.protobuf import service 2 | 3 | 4 | class RpcController(service.RpcController): 5 | 6 | def __init__(self, rpc_channel): 7 | self.rpc_channel = rpc_channel 8 | -------------------------------------------------------------------------------- /rpc/tcp_client.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | 3 | import socket 4 | from rpc.tcp_connection import TcpConnection 5 | from rpc.rpc_channel import RpcChannel 6 | import logger 7 | 8 | 9 | class TcpClient(TcpConnection): 10 | 11 | def __init__(self, ip, port, service_factory, stub_factory): 12 | TcpConnection.__init__(self, None, (ip, port)) 13 | self.logger = logger.get_logger('TcpClient') 14 | self.service_factory = service_factory 15 | self.stub_factory = stub_factory 16 | self.channel = None 17 | self.service = None 18 | self.stub = None 19 | 20 | def close(self): 21 | self.disconnect() 22 | 23 | def sync_connect(self): 24 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 25 | try: 26 | sock.connect(self.peername) 27 | except socket.error, msg: 28 | sock.close() 29 | self.logger.warning("sync_connect failed %s with remote server %s", msg, self.peername) 30 | return False 31 | 32 | # after connected, do the nonblocking setting 33 | sock.setblocking(0) 34 | self.set_socket(sock) 35 | self.setsockopt() 36 | 37 | # reuse the async path: handle_connect() marks the connection established and builds the service, channel and stub (the original called an undefined self.conn_handler here) 38 | self.handle_connect() 39 | return True 40 | 41 | def async_connect(self): 42 | self.create_socket(socket.AF_INET, socket.SOCK_STREAM) 43 | self.setsockopt() 44 | self.connect(self.peername) 45 | 46 | def handle_connect(self): 47 | self.logger.info('connection established.') 48 | self.status = TcpConnection.ST_ESTABLISHED 49 | 50 | self.service = self.service_factory() 51 |
self.channel = RpcChannel(self.service, self) 52 | self.stub = self.stub_factory(self.rpc_channel) 53 | 54 | def handle_close(self): 55 | TcpConnection.handle_close(self) 56 | 57 | def writable(self): 58 | if self.status == TcpConnection.ST_ESTABLISHED: 59 | return TcpConnection.writable(self) 60 | else: 61 | return True 62 | -------------------------------------------------------------------------------- /rpc/tcp_connection.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | 3 | import socket 4 | import asyncore 5 | import logger 6 | 7 | 8 | class TcpConnection(asyncore.dispatcher): 9 | 10 | DEFAULT_RECV_BUFFER = 4096 11 | ST_INIT = 0 12 | ST_ESTABLISHED = 1 13 | ST_DISCONNECTED = 2 14 | 15 | def __init__(self, sock, peername): 16 | asyncore.dispatcher.__init__(self, sock) 17 | self.logger = logger.get_logger('TcpConnection') 18 | self.peername = peername 19 | 20 | self.writebuff = '' 21 | self.recv_buff_size = TcpConnection.DEFAULT_RECV_BUFFER 22 | 23 | self.status = TcpConnection.ST_INIT 24 | if sock: 25 | self.status = TcpConnection.ST_ESTABLISHED 26 | self.setsockopt() 27 | 28 | self.rpc_channel = None 29 | 30 | def setsockopt(self): 31 | self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) 32 | 33 | def get_rpc_channel(self): 34 | return self.rpc_channel 35 | 36 | def attach_rpc_channel(self, channel_interface): 37 | self.rpc_channel = channel_interface 38 | 39 | def is_established(self): 40 | return self.status == TcpConnection.ST_ESTABLISHED 41 | 42 | def set_recv_buffer(self, size): 43 | self.recv_buff_size = size 44 | 45 | def disconnect(self): 46 | if self.status == TcpConnection.ST_DISCONNECTED: 47 | return 48 | 49 | if self.rpc_channel: 50 | self.rpc_channel.on_disconnected() 51 | self.rpc_channel = None 52 | 53 | if self.socket: 54 | asyncore.dispatcher.close(self) 55 | 56 | self.status = TcpConnection.ST_DISCONNECTED 57 | 58 | def getpeername(self): 59 | return self.peername 60 | 61 | def handle_close(self): 62 | self.logger.debug('handle_close') 63 | asyncore.dispatcher.handle_close(self) 64 | self.disconnect() 65 | 66 | def handle_expt(self): 67 | self.logger.debug('handle_expt') 68 | asyncore.dispatcher.handle_expt(self) 69 | self.disconnect() 70 | 71 | def handle_error(self): 72 | self.logger.debug('handle_error') 73 | asyncore.dispatcher.handle_error(self) 74 | self.disconnect() 75 | 76 | def handle_read(self): 77 | self.logger.debug('handle_read') 78 | data = self.recv(self.recv_buff_size) 79 | if data: 80 | if not self.rpc_channel: 81 | return 82 | self.rpc_channel.receive(data) 83 | 84 | def handle_write(self): 85 | self.logger.debug('handle_write') 86 | if self.writebuff: 87 | size = self.send(self.writebuff) 88 | self.writebuff = self.writebuff[size:] 89 | 90 | def writable(self): 91 | return len(self.writebuff) > 0 92 | 93 | def send_data(self, data): 94 | self.writebuff += data 95 | -------------------------------------------------------------------------------- /rpc/tcp_server.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | 3 | import socket 4 | import asyncore 5 | 6 | from rpc.tcp_connection import TcpConnection 7 | from rpc.rpc_channel import RpcChannel 8 | import logger 9 | 10 | 11 | class TcpServer(asyncore.dispatcher): 12 | 13 | def __init__(self, ip, port, service_factory): 14 | asyncore.dispatcher.__init__(self) 15 | self.logger = logger.get_logger('TcpServer') 16 | self.ip = ip 17 | self.port = port 18 | 
self.service_factory = service_factory 19 | 20 | self.create_socket(socket.AF_INET, socket.SOCK_STREAM) 21 | self.set_reuse_addr() 22 | self.bind((self.ip, self.port)) 23 | self.listen(50) 24 | self.logger.info('Server Listening on: ' + str((self.ip, self.port))) 25 | 26 | def handle_accept(self): 27 | try: 28 | sock, addr = self.accept() 29 | except socket.error, e: 30 | self.logger.warning('accept error: ' + e.message) 31 | return 32 | except TypeError, e: 33 | self.logger.warning('accept error: ' + e.message) 34 | return 35 | 36 | self.logger.info('accept client from ' + str(addr)) 37 | conn = TcpConnection(sock, addr) 38 | self.handle_new_connection(conn) 39 | 40 | def stop(self): 41 | self.close() 42 | 43 | def handle_new_connection(self, conn): 44 | self.logger.info('handle_new_connection') 45 | service = self.service_factory() 46 | RpcChannel(service, conn) 47 | 48 | -------------------------------------------------------------------------------- /tests/tcp_server_client_test.py: -------------------------------------------------------------------------------- 1 | __author__ = 'nightfade' 2 | 3 | import unittest 4 | import asyncore 5 | 6 | from example.echo_service_pb2 import EchoString, IEchoService_Stub 7 | from rpc.tcp_server import TcpServer 8 | from rpc.tcp_client import TcpClient 9 | from rpc.tcp_connection import TcpConnection 10 | from rpc.rpc_controller import RpcController 11 | from example.echo_service import EchoService 12 | from example.echo_client import EchoClient 13 | 14 | 15 | class DummyService(object): 16 | pass 17 | 18 | 19 | class DummyStub(object): 20 | def __init__(self, rpc_channel): 21 | pass 22 | 23 | 24 | class EchoRecorder(object): 25 | def __init__(self): 26 | self.record = [] 27 | 28 | def write(self, message): 29 | self.record.append(message) 30 | 31 | 32 | class TcpServerClientTest(unittest.TestCase): 33 | 34 | def setUp(self): 35 | self.ip = '127.0.0.1' 36 | self.port = 65432 37 | 38 | def test_connection(self): 39 | server = TcpServer(self.ip, self.port, DummyService) 40 | client = TcpClient(self.ip, self.port, DummyService, DummyStub) 41 | client.async_connect() 42 | 43 | asyncore.loop(timeout=0.1, count=10) 44 | 45 | self.assertEqual(client.status, TcpConnection.ST_ESTABLISHED) 46 | 47 | server.close() 48 | client.close() 49 | 50 | def test_echo(self): 51 | TcpServer(self.ip, self.port, EchoService) 52 | client = TcpClient(self.ip, self.port, EchoClient, IEchoService_Stub) 53 | 54 | client.async_connect() 55 | 56 | echo_recorder = EchoRecorder() 57 | rpc_count = 0 58 | 59 | for i in xrange(10): 60 | asyncore.loop(0.1, count=1) 61 | if client.stub: 62 | if not client.service.streamout: 63 | client.service.set_streamout(echo_recorder) 64 | request = EchoString() 65 | request.message = str(rpc_count) 66 | controller = RpcController(client.rpc_channel) 67 | client.stub.echo(controller, request, None) 68 | rpc_count += 1 69 | 70 | asyncore.loop(0.1, count=30) 71 | 72 | self.assertEqual(len(echo_recorder.record), rpc_count) 73 | 74 | echo_recorder.record.sort(key=int)  # a cmp function returning a bool, as originally written here, does not sort correctly 75 | for i in xrange(rpc_count): 76 | self.assertEqual(echo_recorder.record[i], str(i)) 77 | --------------------------------------------------------------------------------
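
A few short sketches follow to make the pieces above concrete. First, the service_reflection machinery: a generated stub only needs *some* object with a `CallMethod(...)` method behind it, because `_StubMethod` forwards every generated method to `stub.rpc_channel.CallMethod`. A minimal sketch, assuming the repo's `example` package is importable and that the service method is named `echo` as in the test above; `RecordingChannel` is a hypothetical test double, not part of the repo:

```python
from example.echo_service_pb2 import EchoString, IEchoService_Stub
from rpc.rpc_controller import RpcController


class RecordingChannel(object):
    """Duck-typed stand-in for an RpcChannel that records calls."""

    def __init__(self):
        self.calls = []

    def CallMethod(self, method_descriptor, rpc_controller,
                   request, response_class, done):
        # _StubMethod lands here for every generated stub method.
        self.calls.append((method_descriptor.name, request.message))


channel = RecordingChannel()
stub = IEchoService_Stub(channel)      # __init__ injected by GeneratedServiceStubType

request = EchoString()
request.message = 'ping'
stub.echo(RpcController(channel), request, None)

assert channel.calls == [('echo', 'ping')]
```

This is why the stub classes need no hand-written code: the metaclass wires every method descriptor straight through to the channel.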
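
The text_format module dumped above can likewise be exercised with the repo's own message type; a small round-trip sketch, assuming `EchoString` has the single `message` string field the tests use:

```python
from google.protobuf import text_format
from example.echo_service_pb2 import EchoString

msg = EchoString()
msg.message = 'hello'

text = text_format.MessageToString(msg)   # -> 'message: "hello"\n'

copy = EchoString()
text_format.Merge(text, copy)             # parse the ASCII form back
assert copy.message == 'hello'
```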
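
rpc/rpc_channel.py frames every call as a 4-byte big-endian length (`'!I'`), a 2-byte method index (`'!H'`), and then the serialized request, where the length covers the index plus the payload, matching `RpcChannel.CallMethod` and `RpcParser.feed`. A standalone sketch of that framing; `pack_frame` and `unpack_frame` are illustrative helpers, not repo functions:

```python
import struct

HEAD_FMT, INDEX_FMT = '!I', '!H'


def pack_frame(method_index, payload):
    # size counts the index field plus the payload, but not the size field itself
    size = struct.calcsize(INDEX_FMT) + len(payload)
    return (struct.pack(HEAD_FMT, size) +
            struct.pack(INDEX_FMT, method_index) +
            payload)


def unpack_frame(data):
    head_len = struct.calcsize(HEAD_FMT)
    index_len = struct.calcsize(INDEX_FMT)
    (size,) = struct.unpack(HEAD_FMT, data[:head_len])
    (index,) = struct.unpack(INDEX_FMT, data[head_len:head_len + index_len])
    payload = data[head_len + index_len:head_len + size]
    return index, payload


frame = pack_frame(0, b'\x0a\x01x')   # hypothetical serialized request bytes
assert unpack_frame(frame) == (0, b'\x0a\x01x')
```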
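
Finally, the whole stack can be driven outside unittest; a condensed sketch that follows the same wiring as tests/tcp_server_client_test.py (Python 2, as throughout the repo, with the repo root on PYTHONPATH):

```python
import asyncore

from example.echo_service_pb2 import EchoString, IEchoService_Stub
from example.echo_service import EchoService
from example.echo_client import EchoClient
from rpc.tcp_server import TcpServer
from rpc.tcp_client import TcpClient
from rpc.rpc_controller import RpcController

server = TcpServer('127.0.0.1', 65432, EchoService)
client = TcpClient('127.0.0.1', 65432, EchoClient, IEchoService_Stub)
client.async_connect()

# pump asyncore until the connection is up and the stub has been built
for _ in xrange(10):
    asyncore.loop(timeout=0.1, count=1)
    if client.stub:
        break

request = EchoString()
request.message = 'hello'
controller = RpcController(client.rpc_channel)
client.stub.echo(controller, request, None)   # asynchronous; reply goes to EchoClient

asyncore.loop(timeout=0.1, count=10)          # let the echo round-trip complete
server.stop()
client.close()
```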