├── README.md
├── code
│   ├── constant.py
│   ├── demonstrate_change_rate.py
│   ├── demonstrate_clusters.py
│   ├── demonstrate_correlation.py
│   ├── demonstrate_hotspots.py
│   ├── demonstrate_tide_effect.py
│   ├── flow_prediction.py
│   ├── statistic_L_function.py
│   ├── statistic_density_base_station.py
│   ├── statistic_density_human_flow.py
│   ├── statistic_hour_user.py
│   ├── statistic_user_hour_direction.py
│   └── statistic_user_hour_distance.py
├── figure
│   ├── 01.png
│   ├── 02.png
│   ├── 03.png
│   ├── 04.png
│   ├── 05.png
│   ├── 06.png
│   ├── 07.png
│   ├── 08.png
│   ├── 09.png
│   ├── 10.png
│   ├── 11.png
│   ├── 12_1.png
│   ├── 12_2.png
│   ├── 13.png
│   ├── 14.png
│   ├── 15.png
│   ├── system.png
│   └── t1.png
├── onspark
│   ├── statistic_basics.py
│   ├── statistic_hour_user.py
│   ├── statistic_pos_hour_user.py
│   ├── statistic_pos_user.py
│   ├── statistic_pos_usergroup.py
│   ├── statistic_user_center.py
│   └── statistic_user_hour_distance.py
└── 基于移动网络流量日志的城市空间行为分析.pdf

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

Spatio-temporal Pattern Mining of Group and Regional Mobility Behavior
=============

Data Collection and Dataset
----

The data used in this project were collected from the Hangzhou Mobile 3G network; the network topology and data-collection architecture are shown in the figure. Traffic from a user's mobile device (User Equipment, UE) to the Internet passes through three stages: the Radio Access Network, the Core Network, and the Public Network. A Base Transceiver Station (BTS) first forwards user data to a Base Station Controller (BSC). Within the core network, packets are routed over the Gb interface to a Serving GPRS Support Node (SGSN) and then to a Gateway GPRS Support Node (GGSN), which provides connectivity between the internal mobile network and the external Internet. IP packets are simultaneously mirrored over the Gn interface to a Network Traffic Mining Platform (NTMP), where Deep Packet Inspection (DPI) extracts HTTP logs and related fields, which are uploaded to and stored in the distributed data warehouse HDFS.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/system.png)

The dataset covers roughly two weeks, from August 16 to 28, 2012. Each record contains the International Mobile Subscriber Identity (IMSI), the Location Area Code (LAC) and Cell ID (CI) of the base station the user was attached to, and HTTP request parameters including the timestamp. Users are distinguished by their IMSI; each (LAC, CI) pair is joined with base-station location data to obtain longitude-latitude coordinates, which, combined with the HTTP timestamps, yield user trajectories. The raw data amount to about 500 GB, and Spark is used for data fusion and statistical analysis. The basic statistics of the dataset are shown in the table.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/t1.png)

Group-level Spatio-temporal Pattern Mining
----

The project first mines group-level spatio-temporal patterns. The figure shows the spatial density of base stations across the study area: stations are densely distributed in the city center and sparser in the suburbs. Since the coverage radius of a 3G cell ranges from a few hundred meters up to several kilometers, the urban study area can be regarded as essentially fully covered by base stations.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/01.png)

The next figure shows the average number of connected users city-wide per hour, computed separately for weekdays and weekends. Far more users are connected during the day (8:00-18:00) than at night, and weekday totals are slightly higher than weekend totals.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/02.png)

* Spatial point pattern analysis

The project analyzes the urban area with a grid partition: the geographic space is divided into equal-sized square cells, and the large-scale spatio-temporal behavior of the city and its residents is studied from per-cell user statistics. Experiments suggest that a cell size of 200-500 m is most suitable: it matches the positioning accuracy of base stations, while keeping enough users in most cells for the statistics to be meaningful. The mapping from a coordinate to its grid cell is sketched below.
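As a minimal sketch of this coordinate-to-cell mapping (the bounds come from `code/constant.py`, and the 225×150 grid shape matches the 200 m grid used throughout the analysis scripts; the sketch itself is illustrative, not part of the pipeline):

```python
# Map a (lng, lat) coordinate to a grid-cell index; bounds from code/constant.py.
lng_min, lng_max = 120.02, 120.48
lat_min, lat_max = 30.15, 30.42
rangex, rangey = 225, 150  # roughly 200 m per cell at this latitude

def to_grid(lng, lat):
    """Return (gx, gy) for points inside the study area, else None."""
    if not (lng_min <= lng < lng_max and lat_min <= lat <= lat_max):
        return None
    gx = int((lng - lng_min) / (lng_max - lng_min) * rangex)
    gy = int((lat - lat_min) / (lat_max - lat_min) * rangey)
    return gx, gy

print(to_grid(120.17, 30.25))  # a point near the city center -> (73, 55)
```

This is the same mapping the Spark jobs in `onspark/` apply to every HTTP log record before aggregation (some of them at a coarser 90×60 grid).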
Planar spatial point pattern analysis characterizes the distribution of a large set of points in space. The methods in common use fall into two broad families, targeting the first-order and second-order effects of a spatial process. First-order effects concern how the mean intensity of the point distribution varies with location and are typically examined with Kernel Density Estimation (KDE); second-order effects concern the spatial dependence among points and are typically examined with Ripley's K function, which measures clustering from the pairwise distances between points in the study region. Spatial point pattern analysis reveals the basic laws governing how the urban population is distributed over the city.

The figure shows the kernel density estimate of the total number of connections in a single day, reflecting the population density over the urban area. The area around West Lake, the core of the city, is the most densely populated, with density decaying gradually outward. (A small synthetic example of the estimator follows the figure.)

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/03.png)
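A minimal illustration of the estimator with `scipy.stats.gaussian_kde`, evaluated over the study area's bounding box; the points here are synthetic stand-ins for connection locations, with a made-up cluster center playing the role of the dense area around West Lake:

```python
# KDE sketch on synthetic points (not the project's data).
import numpy as np
from scipy.stats import gaussian_kde

np.random.seed(0)
pts = np.random.normal(loc=[120.17, 30.25], scale=[0.03, 0.02], size=(1000, 2))

kde = gaussian_kde(pts.T)                        # expects a (d, N) array
gx, gy = np.mgrid[120.02:120.48:100j, 30.15:30.42:100j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(100, 100)
print(density.max())                             # peak lies near the synthetic center
```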
Second-order effects can also be studied with the L function, a variance-stabilized transform of the K function that describes how the clustering of the point pattern changes with spatial scale. The figure shows the L-function statistics for weekdays and weekends: the urban population is markedly clustered in space, more strongly so on weekdays, and the peak of the function suggests a characteristic spatial scale of about 6,200 m. A simplified form of the K/L computation is sketched after the figure.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/04.png)
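A minimal, edge-correction-free sketch of the K and L estimates on synthetic points; the project's own estimator in `code/statistic_L_function.py` additionally weights cell pairs by their user counts:

```python
# Naive Ripley's K / L sketch: under complete spatial randomness L(r) ~ r,
# so values above the diagonal indicate clustering at that scale.
import numpy as np

np.random.seed(1)
pts = np.random.uniform(0, 1000, size=(300, 2))    # synthetic points, meters
area, n = 1000.0 * 1000.0, len(pts)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                        # drop self-pairs

for r in (100, 200, 400):
    K = area * (d <= r).sum() / (n * (n - 1))      # naive K estimate
    L = np.sqrt(K / np.pi)
    print((r, round(L, 1)))                        # L(r) stays close to r here
```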
* Group mobility analysis

The figure shows how the population's travel distance varies over time, with one-hour windows. Each box in the box plot summarizes five statistics: the minimum, the median, the maximum, and the first and third quartiles, outlining the basic distribution of the data. Average travel distances are larger on weekdays than on weekends, with pronounced commuting peaks in the morning and evening, as expected.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/05.png)

The next figure shows the direction and distance of movements within a single day. For each user, the approximate positions in two adjacent time windows are connected, and the mean travel distance is accumulated into eight sectors: east, southeast, south, southwest, west, northwest, north, and northeast. Five observation sites were chosen east (C), south (D), west (A), north (B), and in the middle (E) of the city center; each site is shown as a radar chart whose radius along each direction gives the expected daily displacement in that direction. At every site the transfer directions are biased, to varying degrees, toward the central district, indicating that the city center is also the strong functional center of the city. (A sketch of the eight-sector binning follows the figure.)

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/06.png)
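The eight-sector binning can be sketched as below; this mirrors the angle thresholds used in `code/statistic_user_hour_direction.py`, with synthetic displacement vectors standing in for real inter-window moves:

```python
# Bin displacement vectors into 8 compass sectors (0=E, 1=NE, ..., 7=SE)
# and accumulate per-sector mean distances.
import math
from collections import defaultdict

def sector(dx, dy):
    angle = math.atan2(dy, dx)                    # radians in (-pi, pi]
    return int(round(angle / (math.pi / 4))) % 8  # nearest 45-degree direction

dists = defaultdict(list)
for dx, dy in [(300.0, 50.0), (-200.0, 180.0), (10.0, -400.0)]:  # made-up moves, m
    dists[sector(dx, dy)].append(math.hypot(dx, dy))

print({s: sum(v) / len(v) for s, v in dists.items()})
```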
Region-level Spatio-temporal Pattern Mining
----

* Hotspot region distribution

A hotspot region is a region where the population is most concentrated within a time window W. The total number of users connected to all base stations in a region during the window serves as an estimate of its population concentration, and the top X percent of regions by this measure are defined as the window's hotspot regions. The spatio-temporal patterns of hotspots (the most densely populated regions) matter for urban planning, for adjusting the organization and structure of urban space, and for allocating public resources. Traffic congestion is closely tied to the emergence of hotspots, and failing to detect and manage them can end in tragedies such as stampedes. Understanding how hotspots arise helps optimize the construction of transport facilities and the deployment of public infrastructure such as network base stations, and it provides a basis for further mining, such as predicting and warning of population surges. (A minimal sketch of the top-X% extraction follows.)
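A minimal sketch of the extraction step, with synthetic counts standing in for the per-cell user totals of one window:

```python
# Mark the top-X% most crowded cells of one time window as hotspots.
import numpy as np

np.random.seed(2)
counts = np.random.poisson(lam=50, size=(225, 150))  # synthetic users per cell

x_percent = 1.0                                      # hotspot share (assumed)
threshold = np.percentile(counts, 100 - x_percent)
hotspots = counts >= threshold
print(hotspots.sum())                                # number of hotspot cells
```

The project's scripts instead take a fixed top-30 cells per window (see `code/demonstrate_hotspots.py`), which is one concrete choice of X.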
The figure shows where and how often hotspot regions appear within a day, computed separately for weekdays, weekends, daytime, and nighttime; bar height is proportional to the total time a region spends as a hotspot. Hotspots coincide with the city's core: the area northeast of West Lake is where daytime hotspots concentrate, hosting the city's main commercial district as well as its main transport hub, while the two mainly residential areas adjoining it to the north and south are where nighttime hotspots concentrate. Activity is relatively concentrated on weekdays and more dispersed on weekends, consistent with ordinary patterns of human activity.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/07.png)

To examine possible periodic patterns in when and where hotspots appear, K-Means clustering is applied to the time series of hotspot occurrences in each region. The figure shows the occurrence times over one week, with a colored block marking each window in which a region is a hotspot. Three modes emerge: daytime hotspots (plausibly commercial districts), nighttime hotspots (plausibly residential districts), and bursty hotspots (plausibly tied to one-off events such as organized activities).

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/08.png)

* Regional population turnover rate

The turnover rate measures how much the population of a region changes per unit time; it can be estimated as the size of the symmetric difference of the user sets in adjacent time windows divided by the size of their union. Its spatio-temporal patterns support real-time, macro-level monitoring of urban dynamics: the turnover rate allows road speeds to be estimated, reflecting congestion so that traffic control can be applied in time; it can expose questionable spots in regional or road planning, to be adjusted after field investigation; and it helps in understanding commuting demand and optimizing the allocation of transport resources. The project computes per-region turnover rates on the grid and compares them across regions and across periods of the day. (A minimal sketch of the set-based estimate follows.)
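A minimal sketch of this set-based estimate; note that `code/demonstrate_change_rate.py` uses the closely related quantity (leave + enter) / 2 / (stay + 1) built from the same leave/enter/stay counts:

```python
# Turnover rate between adjacent windows: |symmetric difference| / |union|
# of the user-ID sets observed in one cell.
def turnover(prev_users, next_users):
    prev_users, next_users = set(prev_users), set(next_users)
    union = prev_users | next_users
    if not union:
        return 0.0
    return len(prev_users ^ next_users) / float(len(union))

print(turnover({"a", "b", "c"}, {"b", "c", "d"}))  # 2 changed of 4 -> 0.5
```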
The figure shows the turnover rate of each region during different periods of one day. At 8:00 and 18:00, the morning and evening commuting peaks, turnover is higher than at midday or at night, and the regions with high turnover coincide closely with the city's main roads. Two factors contribute: while commuting, users lack fixed-network access such as WiFi and are more likely to go online over 3G; and as a commuting user moves, the serving base station keeps switching, so the overlap between the user sets of consecutive windows in the same region is low.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/09.png)

The next figure shows the turnover rate for all periods, normalized, for weekdays and weekends. Daytime turnover is clearly higher than at night, peaking at the rush hours; weekend turnover is lower than on weekdays, with no pronounced commuting peaks. The weekday rush hours are therefore the periods where traffic control is most needed, and anomaly detection on per-region turnover can uncover traffic irregularities in time for intervention, smoothing travel and helping avert disasters.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/10.png)

* Regional difference index and inter-region similarity

The difference index of a region is the deviation of its within-day population-density profile from the within-day profile of the city as a whole; it captures how distinctive a region is relative to the city overall. Note that density here is not true population density but an estimate from the number of users connected to the region's base stations; since fewer people use the mobile network at night than during the day, the citywide total fluctuates over the day. (A minimal sketch of the computation follows.)
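A minimal sketch of the computation, following the normalization used in `code/demonstrate_tide_effect.py` but on synthetic counts:

```python
# Difference index: a cell's normalized within-day profile minus the
# citywide average profile.
import numpy as np

np.random.seed(3)
counts = np.random.poisson(lam=40, size=(500, 24))       # cells x hours

profiles = counts.astype(float) / counts.sum(axis=1, keepdims=True)
city_avg = profiles.mean(axis=0)                         # citywide profile
diff_index = profiles - city_avg                         # per cell, per hour
print(diff_index[0].round(3))                            # one cell's 24-hour index
```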
The figure shows the difference index of each region at different times of a weekday, normalized; each subplot corresponds to one time window, covering typical periods such as daytime, nighttime, and the rush hours, and maps the spatial distribution of the index. Red marks relatively dense regions and blue relatively sparse ones: the deeper the red, the denser, and the deeper the blue, the sparser. Only relatively populous regions are included, since some suburbs are so sparsely populated that the index fluctuates too strongly over time to be measured reliably. At night the denser population is scattered around the periphery of Hangzhou, while in the daytime it concentrates in the center, revealing the tidal effect of urban population flows.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/11.png)

Three typical sites are examined first: A, B, and C are Genqiu Interchange (艮秋立交桥), Huanglong Times Square (黄龙时代广场), and Santang North Village East (三塘北村东区). Genqiu Interchange, at the junction of major arterial roads, typifies Hangzhou's commuting areas; Huanglong Times Square typifies its commercial districts; and Santang North Village East typifies its residential districts — the three main kinds of functional area in the city.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/12_1.png)

The figure shows how the difference index of the three sites varies over time. The commuting area represented by Genqiu Interchange shows dense population at both the morning and evening peaks, while the commercial district represented by Huanglong Times Square and the residential district represented by Santang North Village East show dense population in the daytime and at night, respectively.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/12_2.png)

The difference index thus supports analyzing the functional character of different parts of the city. Following the results for the typical sites, population concentrates around arterial roads at the commuting peaks, in commercial and office districts during weekday working hours, and in residential and leisure districts outside working hours. The regions are therefore clustered using each region's difference-index time series as its feature, grouping regions with similar series (and hence similar functions) to reveal how functional areas are laid out across the city.

Below is the result of K-Means clustering with each region's difference-index time series as the feature. The figure shows how the resulting classes are distributed over the city. The three colors correspond to three classes of region; combining their spatial layout with prior knowledge of the city's planning, they can be read as commuting areas, office and commercial districts, and leisure and residential districts — the three typical kinds of urban functional area.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/13.png)

The next figure shows the mean difference index of the three classes over time. The temporal profile of each cluster center matches one of the three typical sites above, corroborating the functional interpretation of the three classes.

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/14.png)

The difference index further allows the correlation between regions to be computed. Regions with the same function tend to have similar difference-index series and hence correlate positively; regions with different functions may correlate weakly, or negatively where their functions are complementary. Examining how inter-region correlation varies with distance reveals how concentrated or dispersed the functional areas are. If same-function areas are too concentrated, mega-districts such as Beijing's Huilongguan can emerge, producing huge pendulum-like commuting flows that strain urban traffic; if they are too dispersed, city management becomes harder and urban development is impeded.

The figure shows the regression of inter-region correlation against inter-region distance, with distance on a logarithmic x-axis and correlation on the y-axis. Correlation decreases roughly log-linearly with distance: the solid line is the measured value and the dashed line the logarithmic fit. Within about 4,000 m regions correlate positively; beyond 4,000 m the correlation turns weakly negative. (A synthetic sketch of this computation follows the figure.)

![Alt Text](https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/master/figure/15.png)
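The computation can be sketched as follows, with synthetic difference-index series; the published figure is produced by `code/demonstrate_correlation.py` from the real series:

```python
# Pairwise correlation between cells' difference-index series, with a
# log-linear fit against inter-cell distance.
import numpy as np

np.random.seed(4)
n = 80
pos = np.random.uniform(0, 20000, size=(n, 2))   # synthetic cell centers, m
series = np.random.normal(size=(n, 24))          # synthetic 24-hour series

dist, corr = [], []
for i in range(n):
    for j in range(i + 1, n):
        dist.append(np.linalg.norm(pos[i] - pos[j]))
        corr.append(np.corrcoef(series[i], series[j])[0, 1])

slope, intercept = np.polyfit(np.log(dist), corr, 1)   # log-linear fit
print((round(slope, 4), round(intercept, 4)))
```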
--------------------------------------------------------------------------------
/code/constant.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | # Longitude/latitude bounds of the study area
4 | lng_min = 120.02
5 | lng_max = 120.48
6 | lat_min = 30.15
7 | lat_max = 30.42
--------------------------------------------------------------------------------
/code/demonstrate_change_rate.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import fileinput
5 | import numpy as np
6 | from pylab import *
7 | 
8 | # Time granularity: 1 hour
9 | # Spatial granularity: 200 m
10 | # Aug 20-24 are weekdays
11 | 
12 | ranget, rangex, rangey = 24*2, 225, 150
13 | 
14 | def generate_equilibrium():
15 |     equilibrium = [[[[[0,0,0,0] for k in xrange(ranget-1)] for j in xrange(rangey)] for i in xrange(rangex)] for d in xrange(8)]
16 |     for d, filename in enumerate(sorted(glob.glob(r"../data/pos_30min_usergroup/*"))):
17 |         print filename
18 |         mesh = [[[[] for k in xrange(ranget)] for j in xrange(rangey)] for i in xrange(rangex)]
19 |         for line in fileinput.input(filename):
20 |             part = line.strip().split(" ")
21 |             px, py, t, n = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1]), part[2:]
22 |             mesh[px][py][t] = n
23 |         fileinput.close()
24 |         for i in xrange(rangex):
25 |             for j in xrange(rangey):
26 |                 for t in xrange(ranget-1):
27 |                     leave = len(set(mesh[i][j][t]).difference(set(mesh[i][j][t+1])))
28 |                     enter = len(set(mesh[i][j][t+1]).difference(set(mesh[i][j][t])))
29 |                     stay = len(set(mesh[i][j][t+1]).intersection(set(mesh[i][j][t])))
30 |                     equilibrium[d][i][j][t] = [leave, enter, enter-leave, stay]
31 | 
32 |     with open("../data/var/equilibrium.txt","w") as f:
33 |         for i in xrange(rangex):
34 |             for j in xrange(rangey):
35 |                 f.write("{0},{1}\t{2}\n".format(i,j,"\t".join([" ".join([",".join([str(x) for x in equilibrium[d][i][j][t]])\
36 |                     for t in xrange(ranget-1)]) for d in xrange(8)])))
37 | 
38 | def demonstrate_change_rate():
39 |     tseq = [8,12,16,20]
40 |     stat = [[0 for j in xrange(rangey)] for i in xrange(rangex)]
41 |     for filename in sorted(glob.glob(r"../data/pos_hour_user#/*"))[1:8]:
42 |         for line in fileinput.input(filename):
43 |             part = line.strip().split(" ")
44 |             px, py, t, n = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1]), int(part[2])
45 |             stat[px][py] += n
46 |         fileinput.close()
47 |     stat = [[stat[i][j]/7 for j in xrange(rangey)] for i in xrange(rangex)]
48 | 
49 |     mesh = [[[0 for j in xrange(rangey)] for i in xrange(rangex)] for t in xrange(len(tseq))]
50 |     equilibrium = [[[] for j in xrange(rangey)] for i in xrange(rangex)]
51 |     for line in fileinput.input("../data/var/equilibrium.txt"):
52 |         part = line.strip().split("\t")
53 |         px, py, d = int(part[0].split(",")[0]), int(part[0].split(",")[1]), [[[int(k) for k in j.split(",")] for j in i.split(" ")] for i in part[1:]]
54 |         equilibrium[px][py] = d
55 |         if stat[px][py] >= 500:
56 |             leave = [sum([equilibrium[px][py][d][t][0] for d in xrange(1,8)])/7 for t in xrange(ranget-1)]
57 |             enter = [sum([equilibrium[px][py][d][t][1] for d in xrange(1,8)])/7 for t in xrange(ranget-1)]
58 |             stay = [sum([equilibrium[px][py][d][t][3] for d in xrange(1,8)])/7 for t in xrange(ranget-1)]
59 |             for p, t in enumerate(tseq):
60 |                 mesh[p][px][py] = sum([float((leave[i]+enter[i])/2)/(stay[i]+1) for i in xrange(len(stay))][2*t-1:2*t+3])/4
61 |     fileinput.close()
62 | 
63 |     plt.figure(figsize=(10,8))
64 |     levels, norm = arange(0, 20, 1), cm.colors.Normalize(vmax=20, vmin=0)
65 |     for c, t in enumerate(tseq):
66 |         (X, Y), C = meshgrid(np.arange(100), np.arange(100)), np.array(mesh[c])[20:120,20:120]
67 |         subplot(2,2,c+1)
68 |         cset1 = pcolormesh(X, Y, C.T, cmap=cm.get_cmap("Reds", len(levels)), norm=norm)
69 |         plt.axis([0, 100-1, 0, 100-1])
70 |         plt.xticks(np.linspace(0,100,6))
71 |         plt.yticks(np.linspace(0,100,6))
72 |         plt.title('{0}:00 - {1}:00'.format(str(t).zfill(2),str(t+2).zfill(2)))
73 |         if c == 0:
74 |             plt.xlabel('Longitude grid index /200m')
75 |             plt.ylabel('Latitude grid index /200m')
76 |         if c == 2:
77 |             subplots_adjust(hspace=0.4)
78 |             subplots_adjust(bottom=0.1, left=0.1, right=0.9, top=0.9)
79 |             cax2 = axes([0.92, 0.10, 0.01, 0.8])
80 |             colorbar(cax=cax2)
81 |     show()
82 |     # for postfix in ('eps','png'):
83 |     #     savefig('../figure/{0}/09.{0}'.format(postfix))
84 | 
85 | def demonstrate_change_ratio(generate_log=True):
86 |     from scipy import interpolate
87 |     from matplotlib.ticker import MultipleLocator, FormatStrFormatter
88 | 
89 |     if generate_log:
90 |         wds, wes = range(1,6), range(0,1)+range(6,8)
91 |         equilibrium = [[[] for j in xrange(rangey)] for i in xrange(rangex)]
92 |         leave_wd_stat, enter_wd_stat, stay_wd_stat = [0 for i in xrange(ranget-1)], [0 for i in xrange(ranget-1)], [0 for i in xrange(ranget-1)]
93 |         leave_we_stat, enter_we_stat, stay_we_stat = [0 for i in xrange(ranget-1)], [0 for i in xrange(ranget-1)], [0 for i in xrange(ranget-1)]
94 | 
95 |         for line in fileinput.input("../data/var/equilibrium.txt"):
96 |             part = line.strip().split("\t")
97 |             px, py, day = int(part[0].split(",")[0]), int(part[0].split(",")[1]), [[[int(k) for k in j.split(",")] for j in i.split(" ")] for i in part[1:]]
98 |             equilibrium[px][py] = day
99 |             leave_wd = [sum([equilibrium[px][py][d][t][0] for d in wds])/len(wds) for t in xrange(ranget-1)]
100 |             enter_wd = [sum([equilibrium[px][py][d][t][1] for d in wds])/len(wds) for t in xrange(ranget-1)]
101 |             stay_wd = [sum([equilibrium[px][py][d][t][3] for d in wds])/len(wds) for t in xrange(ranget-1)]
102 |             leave_we = [sum([equilibrium[px][py][d][t][0] for d in wes])/len(wes) for t in xrange(ranget-1)]
103 |             enter_we = [sum([equilibrium[px][py][d][t][1] for d in wes])/len(wes) for t in xrange(ranget-1)]
104 |             stay_we = [sum([equilibrium[px][py][d][t][3] for d in wes])/len(wes) for t in xrange(ranget-1)]
105 |             leave_wd_stat, enter_wd_stat, stay_wd_stat = [i+j for i,j in zip(leave_wd_stat,leave_wd)], [i+j for i,j in zip(enter_wd_stat,enter_wd)], [i+j for i,j in zip(stay_wd_stat,stay_wd)]
106 |             leave_we_stat, enter_we_stat, stay_we_stat = [i+j for i,j in zip(leave_we_stat,leave_we)], [i+j for i,j in zip(enter_we_stat,enter_we)], [i+j for i,j in zip(stay_we_stat,stay_we)]
107 |         fileinput.close()
108 | 
109 |         with open("../data/var/mobility.txt","w") as f:
110 |             f.write(" ".join([str(float((leave_wd_stat[i]+enter_wd_stat[i])/2)/(stay_wd_stat[i])) for i in xrange(len(stay_wd_stat))])+"\n")
111 |             f.write(" ".join([str(float((leave_we_stat[i]+enter_we_stat[i])/2)/(stay_we_stat[i])) for i in xrange(len(stay_we_stat))])+"\n")
112 | 
113 |     ratio1 = [float(i) for i in open("../data/var/mobility.txt","r").read().split('\n')[0].split(" ")]
114 |     ratio2 = [float(i) for i in open("../data/var/mobility.txt","r").read().split('\n')[1].split(" ")]
115 |     maximum, minimum = max(max(ratio1),max(ratio2)), min(min(ratio1),min(ratio2))
116 |     ratio1 = [(i-minimum)/(maximum-minimum) for i in ratio1]
117 |     ratio2 = [(i-minimum)/(maximum-minimum) for i in ratio2]
118 | 
119 |     fig = plt.figure()
120 |     ax = fig.add_subplot(111)
121 |     for y, linestyle, label in [(ratio1,'k-',"Weekday"), (ratio2,'k--',"Weekend")]:
122 |         tck = interpolate.splrep([i/2.0 for i in xrange(ranget-1)] ,y,s=0)
123 |         x = np.arange(0,23,0.1)
124 |         y = interpolate.splev(x,tck,der=0)
125 |         plt.plot(x,y,linestyle,label=label,linewidth=2)
126 |     plt.xlim(0,23)
127 |     plt.ylim(0,1.05)
128 |     plt.xlabel('Time /hour')
129 |     plt.ylabel('Mobility Rate')
130 |     handles, labels = ax.get_legend_handles_labels()
131 |     ax.legend(handles,labels)
132 |     xmajorLocator = MultipleLocator(1)
133 |     xmajorFormatter = FormatStrFormatter('%d')
134 |     ax.xaxis.set_major_locator(xmajorLocator)
135 |     ax.xaxis.set_major_formatter(xmajorFormatter)
136 |     # show()
137 |     for postfix in ('eps','png'):
138 |         savefig('../figure/{0}/10.{0}'.format(postfix))
139 | 
140 | 
141 | if __name__ == "__main__":
142 |     # generate_equilibrium()
143 |     demonstrate_change_rate()
144 |     # demonstrate_change_ratio(generate_log=False)
145 | 
--------------------------------------------------------------------------------
/code/demonstrate_clusters.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import fileinput
4 | import numpy as np
5 | from pylab import *
6 | from constant import *
7 | 
8 | # Time granularity: 1 hour
9 | # Spatial granularity: 200 m
10 | # Aug 20-24 are weekdays
11 | 
12 | ranget, rangex, rangey = 24, 225, 150
13 | 
14 | def demonstrate_examples():
15 |     from scipy import interpolate
16 |     from matplotlib.ticker import MultipleLocator, FormatStrFormatter
17 | 
18 |     differentiate = {}
19 |     for line in fileinput.input("../data/var/weekday.txt"):
20 |         part = line.strip().split(" ")
21 |         x, y, f = int(part[0]), int(part[1]), [float(i) for i in part[2:]]
22 |         differentiate[(x,y)] = f
23 |     fileinput.close()
24 | 
25 |     fig = plt.figure()
26 |     ax1 = fig.add_subplot(111)
27 |     for lng, lat, linestyle, label in [(120.206,30.282,'k-',"A"), (120.132,30.280,'k--',"B"), (120.176,30.310,'k:',"C")]:
28 |         px, py = int((lng-lng_min)/(lng_max-lng_min)*rangex), int((lat-lat_min)/(lat_max-lat_min)*rangey)
29 |         x, y = [i for i in xrange(ranget)], differentiate[tuple((px,py))]
30 |         tck = interpolate.splrep(x,y,s=0)
31 |         xnew = np.arange(0,23,0.1)
32 |         ynew = interpolate.splev(xnew,tck,der=0)
33 |         plt.plot(xnew,ynew,linestyle,label=label,linewidth=2)
34 |     plt.plot([0,23],[0,0],'k--')
35 |     plt.xlim(0,23)
36 |     plt.ylim(-0.06,0.06)
37 |     plt.xlabel('Time /hour')
38 |     plt.ylabel('Difference index')
39 |     handles, labels = ax1.get_legend_handles_labels()
40 |     ax1.legend(handles, labels)
41 |     xmajorLocator = MultipleLocator(1)
42 |     xmajorFormatter = FormatStrFormatter('%d')
43 |     ax1.xaxis.set_major_locator(xmajorLocator)
44 |     ax1.xaxis.set_major_formatter(xmajorFormatter)
45 |     # show()
46 |     for postfix in ('eps','png'):
47 |         savefig('../figure/{0}/12.{0}'.format(postfix))
48 | 
49 | def demonstrate_clusters():
50 |     from sklearn.cluster import KMeans
51 |     from scipy import interpolate
52 |     from matplotlib.ticker import MultipleLocator, FormatStrFormatter
53 | 
54 |     plist, X = [], []
55 |     for line in fileinput.input("../data/var/weekday.txt"):
56 |         part = line.strip().split(" ")
57 |         x, y, f = int(part[0]), int(part[1]), [float(i) for i in part[2:]]
58 |         plist.append([x,y])
59 |         X.append(f)
60 |     fileinput.close()
61 | 
62 |     k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
63 |     k_means.fit(X)
64 |     # k_means.labels_ holds each cell's cluster id;
65 |     # k_means.cluster_centers_ holds the 24-hour centroid profiles.
66 | 
67 |     mesh = [[0 for j in xrange(rangey)] for i in xrange(rangex)]
68 |     for i in xrange(len(k_means.labels_)):
69 |         if k_means.labels_[i] == 0:
70 |             mesh[plist[i][0]][plist[i][1]] = 1.5
71 |         if k_means.labels_[i] == 1:
72 |             mesh[plist[i][0]][plist[i][1]] = 0.6
73 |         if k_means.labels_[i] == 2:
74 |             mesh[plist[i][0]][plist[i][1]] = -1
75 | 
76 |     fig = plt.figure()
77 |     ax = fig.add_subplot(111)
78 |     (X, Y), C = meshgrid(np.arange(100), np.arange(100)), np.array(mesh)[20:120,20:120]
79 |     pcolormesh(X, Y, C.T, cmap='RdBu', vmin=-2, vmax=2)
80 |     plt.axis([0, 100-1, 0, 100-1])
81 |     plt.xlabel('Longitude grid index /200m')
82 |     plt.ylabel('Latitude grid index /200m')
83 |     # plt.show()
84 |     for postfix in ('eps','png'):
85 |         savefig('../figure/{0}/13.{0}'.format(postfix))
86 | 
87 |     fig = plt.figure()
88 |     ax1 = fig.add_subplot(111)
89 |     for _cluster, linestyle, label in [(0,'k-',"Cluster 1"), (1,'k--',"Cluster 2"), (2,'k:',"Cluster 3")]:
90 |         x, y = [i for i in xrange(ranget)], k_means.cluster_centers_[_cluster]
91 |         tck = interpolate.splrep(x,y,s=0)
92 |         xnew = np.arange(0,23,0.1)
93 |         ynew = interpolate.splev(xnew,tck,der=0)
94 |         plt.plot(xnew,ynew,linestyle,label=label,linewidth=2)
95 |     plt.plot([0,23],[0,0],'k--')
96 |     plt.xlim(0,23)
97 |     plt.ylim(-0.03,0.03)
98 |     plt.xlabel('Time /hour')
99 |     plt.ylabel('Difference index')
100 |     handles, labels = ax1.get_legend_handles_labels()
101 |     ax1.legend(handles, labels)
102 |     xmajorLocator = MultipleLocator(1)
103 |     xmajorFormatter = FormatStrFormatter('%d')
104 |     ax1.xaxis.set_major_locator(xmajorLocator)
105 |     ax1.xaxis.set_major_formatter(xmajorFormatter)
106 |     # show()
107 |     for postfix in ('eps','png'):
108 |         savefig('../figure/{0}/14.{0}'.format(postfix))
109 | 
110 | 
111 | if __name__ == "__main__":
112 |     # demonstrate_examples()
113 |     demonstrate_clusters()
114 | 
--------------------------------------------------------------------------------
/code/demonstrate_correlation.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import math
4 | import fileinput
5 | 
6 | # Time granularity: 1 hour
7 | # Spatial granularity: 200 m
8 | # Aug 20-24 are weekdays
9 | 
10 | ranget, rangex, rangey = 24, 225, 150
11 | 
12 | def euclidean(p1,p2):
13 |     dist = math.sqrt(sum([pow(i-j,2) for i,j in zip(p1,p2)]))
14 |     return dist
15 | 
16 | def demonstrate_correlation(function=""):
17 |     if function == "generate_log":
18 |         plist = []
19 |         for line in fileinput.input("../data/var/weekday.txt"):
20 |             part = line.strip().split(" ")
21 |             x, y, f = int(part[0]), int(part[1]), [float(i) for i in part[2:]]
22 |             plist.append([[x,y],f])
23 |         fileinput.close()
24 | 
25 |         covariance = {}
26 |         for i in range(0, len(plist)-1):
27 |             for j in range(i+1, len(plist)):
28 |                 dist = int(round(euclidean(plist[i][0], plist[j][0]),0))
29 |                 covariance[dist] = covariance.get(dist,[0,0,0])
30 |                 covariance[dist][0] += 1
31 |                 covariance[dist][1] += sum([plist[i][1][t]*plist[j][1][t] for t in xrange(24)])
32 |                 covariance[dist][2] += sum([(plist[i][1][t]**2+plist[j][1][t]**2)/2 for t in xrange(24)])
33 |         covariance = sorted([{'d':d,'v':covariance[d]} for d in covariance], key=lambda x:x['d'], reverse=False)
34 | 
35 |         with open("../data/var/covariance.txt","w") as f:
36 |             for e in covariance:
37 |                 f.write("{0}\t{1}\t{2}\n".format(e['d']*200,e['v'][0],e['v'][1]/e['v'][2]))
38 | 
39 |     if function == "plot_correlation":
40 |         import numpy as np
41 |         from pylab import *
42 | 
43 |         covariance = {}
44 |         for line in fileinput.input("../data/var/covariance.txt"):
45 |             d, r = int(line.strip().split('\t')[0]), float(line.strip().split('\t')[2])
46 |             covariance[d] = r
47 | 
48 |         fig = plt.figure()
49 |         ax = fig.add_subplot(111)
50 |         ax.semilogx([(d+1)*200 for d in range(50)], [covariance[(d+1)*200] for d in range(50)],'k-',linewidth=2,label="correlation")
51 |         ax.plot([100,2750],[0.5,0],'k:',linewidth=2,label="fitting")
52 |         ax.plot([100,20000],[0,0],'k--')
53 |         plt.xlim(100,20000)
54 |         plt.ylim(-0.1,0.5)
55 |         plt.xlabel('Distance /m')
56 |         plt.ylabel('Correlation')
57 |         handles, labels = ax.get_legend_handles_labels()
58 |         ax.legend(handles, labels)
59 |         # plt.show()
60 |         for postfix in ('eps','png'):
61 |             savefig('../figure/{0}/15.{0}'.format(postfix))
62 | 
63 | 
64 | if __name__ == "__main__":
65 |     demonstrate_correlation(function="generate_log")
66 |     demonstrate_correlation(function="plot_correlation")
67 | 
--------------------------------------------------------------------------------
/code/demonstrate_hotspots.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import fileinput
5 | import numpy as np
6 | from pylab import *
7 | from constant import *
8 | 
9 | # Time granularity: 1 hour
10 | # Spatial granularity: 200 m
11 | # Aug 20-24 are weekdays
12 | 
13 | ranget, rangex, rangey = 24, 225, 150
14 | 
15 | def demonstrate_hotspots(function=""):
16 |     day = [[[[0]*ranget for j in xrange(rangey)] for i in xrange(rangex)] for d in xrange(8)]
17 |     for d, filename in enumerate(sorted(glob.glob(r"../data/pos_hour_user#/*"))):
18 |         print filename
19 |         for line in fileinput.input(filename):
20 |             part = line.strip().split(" ")
21 |             px, py, s, c = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1]), int(part[2])
22 |             day[d][px][py][s] = c
23 |         fileinput.close()
24 | 
25 |     if function == "plot_pattern":
26 |         from sklearn.cluster import KMeans
27 |         from matplotlib.ticker import MultipleLocator, FormatStrFormatter
28 | 
29 |         top = [[[0 for t in xrange(7*ranget)] for j in xrange(rangey)] for i in xrange(rangex)]
30 |         seq = [[[day[d][i][j][s] for d in xrange(1,8) for s in xrange(ranget)] for j in xrange(rangey)] for i in xrange(rangex)]
31 |         for t in xrange(7*ranget):
32 |             itemlist = [{"p":[i,j],"v":seq[i][j][t]} for i in xrange(rangex) for j in xrange(rangey)]
33 |             for item in sorted(itemlist, key=lambda x:x["v"], reverse=True)[0:30]:
34 |                 top[item["p"][0]][item["p"][1]][t] = 1
35 |         disp = [{"v":top[i][j],"k":-1} for i in xrange(rangex) for j in xrange(rangey) if sum(top[i][j])!=0]
36 | 
37 |         k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
38 |         k_means.fit([p["v"] for p in disp])
39 |         k_means_labels = k_means.labels_
40 |         k_means_cluster_centers = k_means.cluster_centers_
41 |         for i in xrange(len(k_means_labels)):
42 |             disp[i]["k"] = k_means_labels[i]
43 |         disp = sorted(disp, key=lambda x:x["k"])
44 | 
45 |         fig = plt.figure(figsize=(6,5))
46 |         ax = fig.add_subplot(111)
47 |         (X, Y) = meshgrid(np.arange(7*ranget), np.arange(len(disp)))
48 |         C = np.array([[x*1.5 for x in i["v"]] if i["k"]==0 else \
49 |             [x*0.6 for x in i["v"]] if i["k"]==1 else \
50 |             [x*-1 for x in i["v"]] for i in disp])
51 |         # plt.pcolormesh(X, Y, C)
52 |         plt.pcolormesh(X, Y, C, cmap='RdBu', vmin=-2, vmax=2)
53 |         plt.xlim(0,7*ranget-1)
54 |         plt.ylim(0,len(disp)-1)
55 |         xmajorLocator = MultipleLocator(12)
56 |         xmajorFormatter = FormatStrFormatter('%d')
57 |         ax.xaxis.set_major_locator(xmajorLocator)
58 |         ax.xaxis.set_major_formatter(xmajorFormatter)
59 |         plt.xlabel('Time /hour')
60 |         plt.ylabel('Region')
61 |         for postfix in ('eps','png'):
62 |             savefig('../figure/{0}/07.{0}'.format(postfix))
63 | 
64 |     if function == "generate_log":
65 |         wds, wes, hotspots = [1,2,3,4,5], [0,6,7], {}
66 |         for dlist, label in [(wds,"wd"),(wes,"we")]:
67 |             seq = [[[day[d][i][j][s] for d in dlist for s in xrange(ranget)] for j in xrange(rangey)] for i in xrange(rangex)]
68 |             top = [[[0 for t in xrange(len(dlist)*ranget)] for j in xrange(rangey)] for i in xrange(rangex)]
69 | 
70 |             for t in xrange(len(dlist)*ranget):
71 |                 itemlist = [{"p":[i,j],"v":seq[i][j][t]} for i in xrange(rangex) for j in xrange(rangey)]
72 |                 for item in sorted(itemlist, key=lambda x:x["v"], reverse=True)[0:30]:
73 |                     top[item["p"][0]][item["p"][1]][t] = 1
74 | 
75 |             with open("../data/var/hotspot_{0}.txt".format(label),"w") as f:
76 |                 for i in xrange(rangex):
77 |                     for j in xrange(rangey):
78 |                         if sum(top[i][j]) != 0:
79 |                             f.write("{0} {1} {2}\n".format(i,j," ".join([str(x) for x in top[i][j]])))
80 | 
81 |             for line in fileinput.input("../data/var/hotspot_{0}.txt".format(label)):
82 |                 part = line.strip().split(" ")
83 |                 px, py, v = int(part[0]), int(part[1]), [int(i) for i in part[2:]]
84 |                 c1 = round(1.*sum([v[d*24+t] for t in range(8,20) for d in xrange(len(dlist))])/len(dlist),2)
85 |                 c2 = round(1.*sum([v[d*24+t] for t in range(0,8)+range(20,24) for d in xrange(len(dlist))])/len(dlist),2)
86 |                 hotspot = hotspots.get((px,py),[0,0,0,0])
87 |                 if label == "wd":
88 |                     hotspot[:2] = [c1,c2]
89 |                 if label == "we":
90 |                     hotspot[2:] = [c1,c2]
91 |                 hotspots[(px,py)] = hotspot
92 |             fileinput.close()
93 | 
94 |         with open("../data/var/hotspots.txt","w") as f:
95 |             for k,v in hotspots.iteritems():
96 |                 lng, lat = round(1.*k[0]/rangex*(lng_max-lng_min)+lng_min,3), round(1.*k[1]/rangey*(lat_max-lat_min)+lat_min,3)
97 |                 f.write("{0} {1} {2}\n".format(lng,lat," ".join([str(x) for x in v])))
98 | 
99 | 
100 | if __name__ == "__main__":
101 |     demonstrate_hotspots(function="plot_pattern")
102 |     # demonstrate_hotspots(function="generate_log")
103 | 
--------------------------------------------------------------------------------
/code/demonstrate_tide_effect.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import fileinput
5 | import numpy as np
6 | from pylab import *
7 | 
8 | # Time granularity: 1 hour
9 | # Spatial granularity: 200 m
10 | # Aug 20-24 are weekdays
11 | 
12 | ranget, rangex, rangey = 24, 225, 150
13 | 
14 | def demonstrate_tide_effect():
15 |     day = [[[[0]*ranget for j in xrange(rangey)] for i in xrange(rangex)] for d in xrange(8)]
16 |     for d, filename in enumerate(sorted(glob.glob(r"../data/pos_hour_user#/*"))):
17 |         print filename
18 |         for line in fileinput.input(filename):
19 |             part = line.strip().split(" ")
20 |             px, py, s, c = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1]), int(part[2])
21 |             day[d][px][py][s] = c
22 |         fileinput.close()
23 | 
24 |     for dlist, fname in [(range(1,6),"weekday"),(range(0,1)+range(6,8),"weekend")]:
25 |         mask = np.array([[1 if np.array([[day[df][i][j][kf] for kf in xrange(ranget)] for df in dlist]).sum()/len(dlist)>=10*ranget else 0 \
26 |             for j in xrange(rangey)] for i in xrange(rangex)]).sum()
27 |         mesh = [[[sum([day[d][i][j][k] for d in dlist])/len(dlist) if np.array([[day[df][i][j][kf] for kf in xrange(ranget)] for df in dlist]).sum()/len(dlist)>=10*ranget else 0 for k in xrange(ranget)] \
28 |             for j in xrange(rangey)] for i in xrange(rangex)]
29 |         mesh = [[[float(mesh[i][j][k])/sum(mesh[i][j]) if sum(mesh[i][j])!=0 else 0 for k in xrange(ranget)] \
30 |             for j in xrange(rangey)] for i in xrange(rangex)]
31 |         avg = [float(np.array([[mesh[i][j][k] for j in xrange(rangey)] for i in xrange(rangex)]).sum())/mask for k in xrange(ranget)]
32 |         mesh = [[[mesh[i][j][k]-avg[k] if sum(mesh[i][j])!=0 else 0 for k in xrange(ranget)] \
33 |             for j in xrange(rangey)] for i in xrange(rangex)]
34 |         with open("../data/var/{0}.txt".format(fname),"w") as f:
35 |             for i in xrange(rangex):
36 |                 for j in xrange(rangey):
37 |                     if sum(mesh[i][j])!=0:
38 |                         f.write("{0} {1} {2}\n".format(i,j," ".join([str(round(x,6)) for x in mesh[i][j]])))
39 | 
40 |     plt.figure(figsize=(12,8))
41 |     levels = arange(-1, 1.1, 0.1)
42 |     cmap, norm = cm.PRGn, cm.colors.Normalize(vmax=1.1, vmin=-1)
43 |     for c, t in enumerate([4,8,10,16,18,22]):
44 |         colormap = [[0 for j in xrange(rangey)] for i in xrange(rangex)]
45 |         for line in fileinput.input("../data/var/weekday.txt"):
46 |             part = line.strip().split(" ")
47 |             x, y, f = int(part[0]), int(part[1]), float(part[2:][t])
48 |             colormap[x][y] = f
49 |         fileinput.close()
50 |         cmax = np.array([[abs(colormap[i][j]) for j in xrange(rangey)] for i in xrange(rangex)]).max()
51 |         colormap = [[colormap[i][j]/cmax for j in xrange(rangey)] for i in xrange(rangex)]
52 |         (X, Y), C = meshgrid(np.arange(100), np.arange(100)), np.array(colormap)[20:120,20:120]
53 |         subplot(2,3,c+1)
54 |         cset = contourf(X, Y, C.T, levels, cmap=cm.get_cmap("seismic", len(levels)), norm=norm)
55 |         plt.axis([0, 100-1, 0, 100-1])
56 |         plt.xticks(np.linspace(0,100,6))
57 |         plt.yticks(np.linspace(0,100,6))
58 |         plt.title('{0}:00'.format(str(t).zfill(2)))
59 |         if c == 0:
60 |             plt.xlabel('Longitude grid index /200m')
61 |             plt.ylabel('Latitude grid index /200m')
62 |         if c == 3:
63 |             subplots_adjust(hspace=0.4)
64 |             subplots_adjust(bottom=0.1, left=0.06, right=0.9, top=0.9)
65 |             cax2 = axes([0.92, 0.10, 0.01, 0.8])
66 |             colorbar(cax=cax2)
67 |     # show()
68 |     for postfix in ('eps','png'):
69 |         savefig('../figure/{0}/11.{0}'.format(postfix))
70 | 
71 | 
72 | if __name__ == "__main__":
73 |     demonstrate_tide_effect()
74 | 
--------------------------------------------------------------------------------
/code/flow_prediction.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import fileinput
5 | import numpy as np
6 | 
7 | # Time granularity: 1 hour
8 | # Spatial granularity: 200 m
9 | # Aug 20-24 are weekdays
10 | 
11 | ranget, rangex, rangey = 24, 225, 150
12 | 
13 | def flow_prediction(method=""):
14 |     record = [[[[0]*ranget for j in xrange(rangey)] for i in xrange(rangex)] for d in xrange(8)]
15 |     for d, filename in enumerate(sorted(glob.glob(r"../data/pos_hour_user#/*"))):
16 |         print filename
17 |         for line in fileinput.input(filename):
18 |             part = line.strip().split(" ")
19 |             px, py, t, c = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1]), int(part[2])
20 |             record[d][px][py][t] = c
21 |         fileinput.close()
22 |     record = filter(lambda x:sum(x)>=10*ranget, [record[d][i][j] for d in xrange(8) for i in xrange(rangex) for j in xrange(rangey)])
23 |     # Peak-normalize each series once here, so both methods below can use it.
24 |     normal = [[1.*record[r][t]/max(record[r]) for t in xrange(ranget)] for r in xrange(len(record))]
25 | 
26 |     if method == "RNN":
27 |         from keras.models import Graph
28 |         from keras.layers import recurrent, Dropout, TimeDistributedDense
29 |         X_train = np.array([[[p] for p in r][:-1] for r in normal])
30 |         y_train = np.array([[[p] for p in r][1:] for r in normal])
31 |         EPOCH_SIZE = 5
32 |         HIDDEN_SIZE = 256
33 |         RNN = recurrent.GRU # Replace with SimpleRNN, LSTM, GRU
34 |         model = Graph()
35 |         model.add_input(name='input', input_shape=(ranget-1,1))
36 |         model.add_node(RNN(HIDDEN_SIZE, return_sequences=True), name='forward_l1', input='input')
37 |         model.add_node(TimeDistributedDense(1), name='dense', input='forward_l1')
38 |         model.add_output(name='output', input='dense')
39 |         model.compile('adam', {'output': 'mean_squared_error'})
40 |         model.fit({'input': X_train, 'output': y_train}, nb_epoch=EPOCH_SIZE, show_accuracy=True)
41 |         y_pred = model.predict({'input': X_train})['output']
42 |         y_error_total, y_real_total = np.zeros((23,)), np.zeros((23,))
43 |         for r in xrange(len(record)):
44 |             y_error_total += abs(np.reshape(max(record[r])*y_pred[r],(23,))-np.array(record[r][1:]))
45 |             y_real_total += np.array(record[r][1:])
46 |         print 1.*y_error_total/y_real_total
47 | 
48 |     if method == "ARIMA":
49 |         from statsmodels.tsa.arima_model import ARIMA
50 |         normal = np.array(normal)[np.random.choice(len(record), 100)]
51 |         y_error_total, y_real_total = np.zeros((23,)), np.zeros((23,))
52 |         for x in normal:
53 |             try:
54 |                 model = ARIMA(np.array(x), order=(2,0,1)).fit(disp=False)
55 |                 y_error_total += abs(model.predict(1,23)-np.array(x[1:]))
56 |                 y_real_total += np.array(x[1:])
57 |             except:
58 |                 continue
59 |         print 1.*y_error_total/y_real_total
60 | 
61 | def plot_error():
62 |     from pylab import *
63 |     from scipy import interpolate
64 |     from matplotlib.ticker import MultipleLocator, FormatStrFormatter
65 | 
66 |     error_RNN = [0]*6 +\
67 |         [0.27470582,0.26526711,0.17191297,0.17505597,0.16296111,0.14432448,\
68 |         0.13287850,0.13018013,0.12400923,0.12198262,0.12254024,0.13259150,\
69 |         0.15847546,0.15153314,0.15145022,0.13929038,0.14714874,0.17220128]
70 |     error_ARIMA = [0]*6 +\
71 |         [0.39960266,0.43644585,0.21430307,0.26086768,0.18084257,0.24340537,\
72 |         0.18037208,0.19926624,0.17947106,0.17663812,0.21210755,0.27719361,\
73 |         0.17571100,0.19287534,0.17002414,0.19231532,0.21721267,0.29783381]
74 |     fig = plt.figure()
75 |     ax1 = fig.add_subplot(111)
76 |     for error, linestyle, label in [(error_RNN,'k-',"RNN"), (error_ARIMA,'k--',"ARIMA")]:
77 |         tck = interpolate.splrep(range(len(error)),error,s=0)
78 |         xnew = np.arange(0,ranget,0.1)
79 |         ynew = interpolate.splev(xnew,tck,der=0)
80 |         plt.plot(xnew,ynew,linestyle,label=label,linewidth=2)
81 |     plt.xlim(6,23)
82 |     plt.ylim(0,0.6)
83 |     plt.xlabel('The $N$-th hour of day')
84 |     plt.ylabel('Error')
85 |     handles, labels = ax1.get_legend_handles_labels()
86 |     ax1.legend(handles, labels)
87 |     xmajorLocator = MultipleLocator(1)
88 |     xmajorFormatter = FormatStrFormatter('%d')
89 |     ax1.xaxis.set_major_locator(xmajorLocator)
90 |     ax1.xaxis.set_major_formatter(xmajorFormatter)
91 |     # show()
92 |     for postfix in ('eps','png'):
93 |         plt.savefig('../figure/{0}/16.{0}'.format(postfix))
94 | 
95 | 
96 | if __name__ == "__main__":
97 |     # flow_prediction(method="RNN")
98 |     # flow_prediction(method="ARIMA")
99 |     plot_error()
100 | 
--------------------------------------------------------------------------------
/code/statistic_L_function.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import math
5 | import fileinput
6 | import numpy as np
7 | from pylab import *
8 | 
9 | # Time granularity: 1 hour
10 | # Spatial granularity: 500 m
11 | # Aug 20-24 are weekdays
12 | 
13 | ranget, rangex, rangey = 24, 90, 60
14 | 
15 | def euclidean(p1,p2):
16 |     dist = math.sqrt(sum([pow(i-j,2) for i,j in zip(p1,p2)]))
17 |     return dist
18 | 
19 | def statistic_L_function():
20 |     from scipy import interpolate
21 | 
22 |     fig = plt.figure(figsize=(8,5))
23 |     ax1 = fig.add_subplot(111)
24 | 
25 |     wds, wes = ['0820','0821','0822','0823','0824'], ['0819','0825','0826']
26 |     for flist, linestyle, label in [(wds,'k-',"Weekday"),(wes,'k--',"Weekend")]:
27 |         mesh = [[0 for j in xrange(rangey)] for i in xrange(rangex)]
28 |         for filename in sorted(glob.glob(r"../data/pos_user#/*")):
29 |             if filename.split("/")[-1] in ["{0}.txt".format(fname) for fname in flist]:
30 |                 for line in fileinput.input(filename):
31 |                     part = line.strip().split(" ")
32 |                     px, py, n = int(part[0].split(",")[0]), int(part[0].split(",")[1]), int(part[1])
33 |                     mesh[px][py] += n
34 |                 fileinput.close()
35 | 
36 |         plist = sorted([mesh[i][j] for i in xrange(rangex) for j in xrange(rangey)], reverse=True)[0:1000]
37 |         itemlist = sorted([{"p":[i,j],"v":mesh[i][j]} for i in xrange(rangex) for j in xrange(rangey)], key=lambda x:x["v"], reverse=True)[0:500]
38 |         prodlist = [0 for i in xrange(30)]
39 |         for i in xrange(500):
40 |             for j in xrange(500):
41 |                 dist, prod = euclidean(itemlist[i]["p"],itemlist[j]["p"]), itemlist[i]["v"]*itemlist[j]["v"]
42 |                 if i != j and dist < 30:
43 |                     prodlist[min(int(dist),29)] += prod
44 |         prodlist_sum = sum(prodlist)
45 |         for i in range(len(prodlist)-1,-1,-1):
46 |             prodlist[i] = (math.sqrt((math.pi*30**2)*sum(prodlist[0:i+1])/prodlist_sum/math.pi)-i)*500
47 | 
48 |         tck = interpolate.splrep([i*500 for i in xrange(30)][1:],prodlist[1:],s=0)
49 |         x = np.arange(500,13000,100)
50 |         y = interpolate.splev(x,tck,der=0)
51 |         plt.plot(x,y,linestyle,label=label,linewidth=2)
52 | 
53 |     plt.xlim(0,14000)
54 |     plt.ylim(0,7000)
55 |     plt.xlabel('Distance /m')
56 |     plt.ylabel('L(d)')
57 |     handles, labels = ax1.get_legend_handles_labels()
58 |     ax1.legend(handles,labels)
59 |     subplots_adjust(wspace=0.4,hspace=0.4)
60 |     # show()
61 |     for postfix in ('eps','png'):
62 |         savefig('../figure/{0}/04.{0}'.format(postfix))
63 | 
64 | 
65 | if __name__ == "__main__":
66 |     statistic_L_function()
67 | 
--------------------------------------------------------------------------------
/code/statistic_density_base_station.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import fileinput
4 | import numpy as np
5 | from pylab import *
6 | from constant import *
7 | 
8 | # Time granularity: 1 hour
9 | # Spatial granularity: 200 m
10 | # Aug 20-24 are weekdays
11 | 
12 | ranget, rangex, rangey = 24, 225, 150
13 | 
14 | def statistic_density_base_station():
15 |     mesh = [[0 for j in xrange(rangey)] for i in xrange(rangex)]
16 |     for line in fileinput.input("../data/base_station/hz_base.txt"):
17 |         part = line.strip().split(" ")
18 |         lng, lat = float(part[3]), float(part[4])
19 |         if lng_min<=lng= 10*24*2:
39 |             f.write("{0} {1} {2} {3}\n".format(i,j,_sum," ".join(["|".join(["{0},{1}".format(p[0]-i,p[1]-j) for p in t]) for t in direc[i][j]])))
40 | 
41 | def plot_user_hour_direction(picx, picy):
42 |     from pylab import *
43 |     from constant import *
44 | 
45 |     print 1.*picx/rangex*(lng_max-lng_min)+lng_min, 1.*picy/rangey*(lat_max-lat_min)+lat_min
46 |     for line in fileinput.input("../data/var/direction_next.txt"):
47 |         part = line.strip().split(" ")
48 |         x, y, s, f = int(part[0]), int(part[1]), int(part[2]), [[[int(k) for k in j.split(",")] for j in i.split("|")] if len(i)!=0 else [] for i in part[3:]]
49 |         if x == picx and y == picy:
50 |             direcs = [[] for d in xrange(8)]
51 |             for T in xrange(ranget-1):
52 |                 if len(f[T])!=0:
53 |                     for item in f[T]:
54 |                         angle = math.atan2(item[1],item[0])
55 |                         dist = euclidean(item,[0,0])
56 |                         if -math.pi/8 < angle <= math.pi/8:
57 |                             direcs[0].append(dist)
58 |                         elif math.pi/8 < angle <= math.pi*3/8:
59 |                             direcs[1].append(dist)
60 |                         elif math.pi*3/8 < angle <= math.pi*5/8:
61 |                             direcs[2].append(dist)
62 |                         elif math.pi*5/8 < angle <= math.pi*7/8:
63 |                             direcs[3].append(dist)
64 |                         elif math.pi*7/8 < angle <= math.pi or -math.pi <= angle <= -math.pi*7/8:
65 |                             direcs[4].append(dist)
66 |                         elif -math.pi*7/8 < angle <= -math.pi*5/8:
67 |                             direcs[5].append(dist)
68 |                         elif -math.pi*5/8 < angle <= -math.pi*3/8:
69 |                             direcs[6].append(dist)
70 |                         elif -math.pi*3/8 < angle <= -math.pi/8:
71 |                             direcs[7].append(dist)
72 |             # width = [math.pi/2*len(direcs[i])/sum([len(direcs[j]) for j in xrange(8)]) for i in xrange(8)]
73 |             radii = [int(200.0*sum(direcs[i])/len(direcs[i])) for i in xrange(8)]
74 |             theta = [math.pi*i/4 for i in xrange(8)]
75 |             ax = plt.subplot(111, polar=True)
76 |             plt.ylim(0,2000)
77 |             # plt.polar(theta,radii)
78 |             plt.fill(theta,radii,"y",joinstyle='bevel',color='r',alpha=0.6)
79 |             plt.show()
80 | 
81 | 
82 | if __name__ == "__main__":
83 |     # statistic_user_hour_direction()
84 |     picx, picy = 38, 76
85 |     # picx, picy = 68, 93
86 |     # picx, picy = 123, 90
87 |     # picx, picy = 74, 22
88 |     # picx, picy = 71, 69
89 |     plot_user_hour_direction(picx, picy)
90 | 
--------------------------------------------------------------------------------
/code/statistic_user_hour_distance.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | import glob
4 | import fileinput
5 | from pylab import *
6 | 
7 | # Time granularity: 1 hour
8 | # Spatial granularity: 200 m
9 | # Aug 20-24 are weekdays
10 | 
11 | ranget, rangex, rangey = 24, 225, 150
12 | 
13 | def statistic_user_hour_distance():
14 |     fig = plt.figure()
15 | 
16 |     wds, wes = ['0820','0821','0822','0823','0824'], ['0819','0825','0826']
17 |     day, stat = [[[] for j in xrange(ranget)] for i in xrange(2)], [[[0,0] for j in xrange(ranget)] for i in xrange(2)]
18 | 
19 |     for d, (flist, label) in enumerate([(wds,"weekdays"),(wes,"weekends")]):
20 |         for filename in sorted(glob.glob(r"../data/user_hour_dist_bs/*")):
21 |             if filename.split("/")[-1] in ["{0}.txt".format(fname) for fname in flist]:
22 |                 for line in fileinput.input(filename):
23 |                     part = line.strip().split(" ")
24 |                     t, dist = int(part[1]), int(part[2])
25 |                     stat[d][t][0] += 1
26 |                     if dist > 0:
27 |                         day[d][t].append(dist)
28 | 
stat[d][t][1] += 1 29 | fileinput.close() 30 | 31 | ax = fig.add_subplot(2,1,d+1) 32 | plt.xlim(0,ranget-1) 33 | plt.ylim(0,6000) 34 | ax.set_xlabel('Time /hour') 35 | ax.set_ylabel('Distance /m') 36 | plt.title('{0}'.format(label)) 37 | ax.boxplot(day[d],0,'') 38 | 39 | subplots_adjust(wspace=0.2,hspace=0.4) 40 | # show() 41 | for postfix in ('eps','png'): 42 | savefig('../figure/{0}/05.{0}'.format(postfix)) 43 | 44 | 45 | if __name__ == "__main__": 46 | statistic_user_hour_distance() 47 | 48 | -------------------------------------------------------------------------------- /figure/01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/01.png -------------------------------------------------------------------------------- /figure/02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/02.png -------------------------------------------------------------------------------- /figure/03.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/03.png -------------------------------------------------------------------------------- /figure/04.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/04.png -------------------------------------------------------------------------------- /figure/05.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/05.png -------------------------------------------------------------------------------- /figure/06.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/06.png -------------------------------------------------------------------------------- /figure/07.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/07.png -------------------------------------------------------------------------------- /figure/08.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/08.png -------------------------------------------------------------------------------- /figure/09.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/09.png -------------------------------------------------------------------------------- /figure/10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/10.png -------------------------------------------------------------------------------- /figure/11.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/11.png -------------------------------------------------------------------------------- /figure/12_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/12_1.png -------------------------------------------------------------------------------- /figure/12_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/12_2.png -------------------------------------------------------------------------------- /figure/13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/13.png -------------------------------------------------------------------------------- /figure/14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/14.png -------------------------------------------------------------------------------- /figure/15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/15.png -------------------------------------------------------------------------------- /figure/system.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/system.png -------------------------------------------------------------------------------- /figure/t1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/figure/t1.png -------------------------------------------------------------------------------- /onspark/statistic_basics.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | from operator import add 5 | from pyspark import SparkConf 6 | from pyspark import SparkContext 7 | 8 | # basics 9 | def extract(line): 10 | import time 11 | try: 12 | part = line.strip().replace('\"','').split(",") 13 | TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5] 14 | pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2] 15 | year, month, day, hour, minute, second = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0]), int(pt2[0]), int(pt2[1]), int(pt2[2]) 16 | hour = hour if hour != 12 else 0 17 | hour = hour if pt3 == "AM" else hour+12 18 | secs = hour*3600+minute*60+second 19 | key = LAC+" "+CI 20 | sl = secs/3600 21 | if bss.has_key(key): 22 | bs = bss[key] 23 | lng, lat = bs["lng"], bs["lat"] 24 | if 120.02<=lng<120.48 and 30.15<=lat<=30.42: 25 | gx, gy = int((lng-120.02)/(120.48-120.02)*225), int((lat-30.15)/(30.42-30.15)*150) 26 | return (str(gx)+","+str(gy), sl, IMSI) 27 | else: 28 | return ("", -1, "") 29 | else: 30 | return ("", -1, "") 31 | except: 32 | return ("", -1, "") 33 | 34 | global bss 35 | 36 | 
if __name__ == "__main__": 37 | import fileinput 38 | bss = {} 39 | for line in fileinput.input("hz_base.txt"): 40 | part = line.strip().split(" ") 41 | num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4]) 42 | bss[num] = {"lng":lng, "lat":lat} 43 | fileinput.close() 44 | conf = SparkConf().setMaster('yarn-client') \ 45 | .setAppName('qiangsiwei') \ 46 | .set('spark.driver.maxResultSize', "8g") 47 | sc = SparkContext(conf = conf) 48 | filename = "0819" 49 | # user# 50 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 51 | counts = lines.map(lambda x : extract(x))\ 52 | .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="")\ 53 | .map(lambda x : (x[2]))\ 54 | .distinct() \ 55 | .map(lambda x : ("user#",1)) \ 56 | .reduceByKey(add) \ 57 | .map(lambda x : str(x[0])+" "+str(x[1])) 58 | output = counts.coalesce(1).saveAsTextFile("./hangzhou/CFF/{0}_basics_user#.csv".format(filename)) 59 | # entry# 60 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 61 | counts = lines.map(lambda x : ("entry#",1)) \ 62 | .reduceByKey(add) \ 63 | .map(lambda x : str(x[0])+" "+str(x[1])) 64 | output = counts.coalesce(1).saveAsTextFile("./hangzhou/CFF/{0}_basics_entry#.csv".format(filename)) 65 | -------------------------------------------------------------------------------- /onspark/statistic_hour_user.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | from operator import add 5 | from pyspark import SparkConf 6 | from pyspark import SparkContext 7 | 8 | # hour_user# 9 | def extract(line): 10 | import time 11 | try: 12 | part = line.strip().replace('\"','').split(",") 13 | TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5] 14 | pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2] 15 | year, month, day, hour, minute, second = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0]), int(pt2[0]), int(pt2[1]), int(pt2[2]) 16 | hour = hour if hour != 12 else 0 17 | hour = hour if pt3 == "AM" else hour+12 18 | secs = hour*3600+minute*60+second 19 | key = LAC+" "+CI 20 | sl = secs/3600 21 | if bss.has_key(key): 22 | bs = bss[key] 23 | lng, lat = bs["lng"], bs["lat"] 24 | if 120.02<=lng<120.48 and 30.15<=lat<=30.42: 25 | gx, gy = int((lng-120.02)/(120.48-120.02)*225), int((lat-30.15)/(30.42-30.15)*150) 26 | return (str(gx)+","+str(gy), sl, IMSI) 27 | else: 28 | return ("", -1, "") 29 | else: 30 | return ("", -1, "") 31 | except: 32 | return ("", -1, "") 33 | 34 | global bss 35 | 36 | if __name__ == "__main__": 37 | import fileinput 38 | bss = {} 39 | for line in fileinput.input("hz_base.txt"): 40 | part = line.strip().split(" ") 41 | num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4]) 42 | bss[num] = {"lng":lng, "lat":lat} 43 | fileinput.close() 44 | conf = SparkConf().setMaster('yarn-client') \ 45 | .setAppName('qiangsiwei') \ 46 | .set('spark.driver.maxResultSize', "8g") 47 | sc = SparkContext(conf = conf) 48 | filename = "0819" 49 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 50 | counts = lines.map(lambda x : extract(x)) \ 51 | .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \ 52 | .map(lambda x : (x[1],x[2]))\ 53 | .distinct() \ 54 | .map(lambda x : (x[0],1)) \ 55 | .reduceByKey(add) \ 56 | .sortByKey() \ 57 | .map(lambda x : str(x[0])+" "+str(x[1])) 58 | 
output = counts.coalesce(1).saveAsTextFile("./hangzhou/CFF/{0}_hour_user#.csv".format(filename)) 59 | -------------------------------------------------------------------------------- /onspark/statistic_pos_hour_user.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | from operator import add 5 | from pyspark import SparkConf 6 | from pyspark import SparkContext 7 | 8 | # pos_hour_user# 9 | def extract(line): 10 | import time 11 | try: 12 | part = line.strip().replace('\"','').split(",") 13 | TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5] 14 | pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2] 15 | year, month, day, hour, minute, second = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0]), int(pt2[0]), int(pt2[1]), int(pt2[2]) 16 | hour = hour if hour != 12 else 0 17 | hour = hour if pt3 == "AM" else hour+12 18 | secs = hour*3600+minute*60+second 19 | key = LAC+" "+CI 20 | sl = secs/3600 21 | if bss.has_key(key): 22 | bs = bss[key] 23 | lng, lat = bs["lng"], bs["lat"] 24 | if 120.02<=lng<120.48 and 30.15<=lat<=30.42: 25 | gx, gy = int((lng-120.02)/(120.48-120.02)*225), int((lat-30.15)/(30.42-30.15)*150) 26 | return (str(gx)+","+str(gy), sl, IMSI) 27 | else: 28 | return ("", -1, "") 29 | else: 30 | return ("", -1, "") 31 | except: 32 | return ("", -1, "") 33 | 34 | global bss 35 | 36 | if __name__ == "__main__": 37 | import fileinput 38 | bss = {} 39 | for line in fileinput.input("hz_base.txt"): 40 | part = line.strip().split(" ") 41 | num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4]) 42 | bss[num] = {"lng":lng, "lat":lat} 43 | fileinput.close() 44 | conf = SparkConf().setMaster('yarn-client') \ 45 | .setAppName('qiangsiwei') \ 46 | .set('spark.driver.maxResultSize', "8g") 47 | sc = SparkContext(conf = conf) 48 | filename = "0819" 49 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 50 | counts = lines.map(lambda x : extract(x)) \ 51 | .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \ 52 | .distinct() \ 53 | .map(lambda x : ((x[0],x[1]),1)) \ 54 | .reduceByKey(add) \ 55 | .sortByKey() \ 56 | .map(lambda x : str(x[0][0])+" "+str(x[0][1])+" "+str(x[1])) 57 | output = counts.saveAsTextFile("./hangzhou/CFF/{0}_pos_hour_user#.csv".format(filename)) 58 | -------------------------------------------------------------------------------- /onspark/statistic_pos_user.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | from operator import add 5 | from pyspark import SparkConf 6 | from pyspark import SparkContext 7 | 8 | # pos_user# 9 | def extract(line): 10 | import time 11 | try: 12 | part = line.strip().replace('\"','').split(",") 13 | TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5] 14 | pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2] 15 | year, month, day, hour, minute, second = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0]), int(pt2[0]), int(pt2[1]), int(pt2[2]) 16 | hour = hour if hour != 12 else 0 17 | hour = hour if pt3 == "AM" else hour+12 18 | secs = hour*3600+minute*60+second 19 | key = LAC+" "+CI 20 | sl = secs/3600 21 | if bss.has_key(key): 22 | bs = bss[key] 23 | lng, lat = bs["lng"], bs["lat"] 24 | if 120.02<=lng<120.48 and 30.15<=lat<=30.42: 25 | gx, gy = int((lng-120.02)/(120.48-120.02)*90), int((lat-30.15)/(30.42-30.15)*60) 26 | return 
(str(gx)+","+str(gy), sl, IMSI) 27 | else: 28 | return ("", -1, "") 29 | else: 30 | return ("", -1, "") 31 | except: 32 | return ("", -1, "") 33 | 34 | global bss 35 | 36 | if __name__ == "__main__": 37 | import fileinput 38 | bss = {} 39 | for line in fileinput.input("hz_base.txt"): 40 | part = line.strip().split(" ") 41 | num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4]) 42 | bss[num] = {"lng":lng, "lat":lat} 43 | fileinput.close() 44 | conf = SparkConf().setMaster('yarn-client') \ 45 | .setAppName('qiangsiwei') \ 46 | .set('spark.driver.maxResultSize', "8g") 47 | sc = SparkContext(conf = conf) 48 | filename = "0819" 49 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 50 | counts = lines.map(lambda x : extract(x)) \ 51 | .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \ 52 | .map(lambda x : (x[0],x[2]))\ 53 | .distinct() \ 54 | .map(lambda x : (x[0],1)) \ 55 | .reduceByKey(add) \ 56 | .sortByKey() \ 57 | .map(lambda x : str(x[0])+" "+str(x[1])) 58 | output = counts.saveAsTextFile("./hangzhou/CFF/{0}_pos_user#.csv".format(filename)) 59 | -------------------------------------------------------------------------------- /onspark/statistic_pos_usergroup.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | from operator import add 5 | from pyspark import SparkConf 6 | from pyspark import SparkContext 7 | 8 | # pos_usergroup 9 | def extract(line): 10 | import time 11 | try: 12 | part = line.strip().replace('\"','').split(",") 13 | TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5] 14 | pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2] 15 | year, month, day, hour, minute, second = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0]), int(pt2[0]), int(pt2[1]), int(pt2[2]) 16 | hour = hour if hour != 12 else 0 17 | hour = hour if pt3 == "AM" else hour+12 18 | secs = hour*3600+minute*60+second 19 | key = LAC+" "+CI 20 | sl = secs/(30*60) 21 | if bss.has_key(key): 22 | bs = bss[key] 23 | lng, lat = bs["lng"], bs["lat"] 24 | if 120.02<=lng<120.48 and 30.15<=lat<=30.42: 25 | gx, gy = int((lng-120.02)/(120.48-120.02)*90), int((lat-30.15)/(30.42-30.15)*60) 26 | return (str(gx)+","+str(gy), sl, IMSI) 27 | else: 28 | return ("", -1, "") 29 | else: 30 | return ("", -1, "") 31 | except: 32 | return ("", -1, "") 33 | 34 | global bss 35 | 36 | if __name__ == "__main__": 37 | import fileinput 38 | bss = {} 39 | for line in fileinput.input("hz_base.txt"): 40 | part = line.strip().split(" ") 41 | num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4]) 42 | bss[num] = {"lng":lng, "lat":lat} 43 | fileinput.close() 44 | conf = SparkConf().setMaster('yarn-client') \ 45 | .setAppName('qiangsiwei') \ 46 | .set('spark.driver.maxResultSize', "8g") 47 | sc = SparkContext(conf = conf) 48 | filename = "0819" 49 | lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename)) 50 | counts = lines.map(lambda x : extract(x)) \ 51 | .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \ 52 | .map(lambda x : ((x[0],x[1]),x[2]))\ 53 | .distinct() \ 54 | .groupByKey() \ 55 | .sortByKey() \ 56 | .map(lambda x : str(x[0][0])+" "+str(x[0][1])+" "+" ".join(list(x[1]))) 57 | output = counts.saveAsTextFile("./hangzhou/CFF/{0}_pos_30min_usergroup_grid.csv".format(filename)) 58 | -------------------------------------------------------------------------------- 
/onspark/statistic_user_center.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-

import sys
from operator import add
from pyspark import SparkConf
from pyspark import SparkContext

# user_center: the centroid of the grid cells each user visits per 30-minute slot.
def center(x):
    # Arithmetic mean of the "gx,gy" grid coordinates in x, rounded to 2 decimals.
    cx = float(sum([int(i.split(",")[0]) for i in x]))/len(x)
    cy = float(sum([int(i.split(",")[1]) for i in x]))/len(x)
    return str(round(cx,2))+","+str(round(cy,2))

def extract(line):
    try:
        part = line.strip().replace('\"','').split(",")
        TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5]
        pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2]
        year, month, day = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0])
        hour, minute, second = int(pt2[0]), int(pt2[1]), int(pt2[2])
        hour = hour if hour != 12 else 0          # 12-hour -> 24-hour clock
        hour = hour if pt3 == "AM" else hour+12
        secs = hour*3600+minute*60+second
        key = LAC+" "+CI
        sl = secs//(30*60)                        # 30-minute slot, 0-47
        if key in bss:
            lng, lat = bss[key]["lng"], bss[key]["lat"]
            if 120.02<=lng<120.48 and 30.15<=lat<30.42:
                # 225 x 150 grid over the study area (~200 m cells).
                gx = int((lng-120.02)/(120.48-120.02)*225)
                gy = int((lat-30.15)/(30.42-30.15)*150)
                return (str(gx)+","+str(gy), sl, IMSI)
        return ("", -1, "")
    except Exception:
        return ("", -1, "")

if __name__ == "__main__":
    import fileinput
    bss = {}
    for line in fileinput.input("hz_base.txt"):
        part = line.strip().split(" ")
        num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4])
        bss[num] = {"lng":lng, "lat":lat}
    fileinput.close()
    conf = SparkConf().setMaster('yarn-client') \
                      .setAppName('qiangsiwei') \
                      .set('spark.driver.maxResultSize', "8g")
    sc = SparkContext(conf = conf)
    filename = "0819"
    lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename))
    # For each (IMSI, slot), collect the distinct cells and take their centroid.
    counts = lines.map(extract) \
        .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \
        .map(lambda x : ((x[2],x[1]),x[0])) \
        .distinct() \
        .groupByKey() \
        .map(lambda x : (x[0],center(list(x[1])))) \
        .sortByKey() \
        .map(lambda x : str(x[0][0])+" "+str(x[0][1])+" "+str(x[1]))
    counts.saveAsTextFile("./hangzhou/CFF/{0}_user_30min_center.csv".format(filename))
--------------------------------------------------------------------------------
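A worked example of the centroid that statistic_user_center.py emits per (IMSI, 30-minute slot); this is center() restated standalone on made-up grid cells:

cells = ["10,20", "14,22", "12,24"]   # hypothetical cells one user touched
cx = float(sum(int(c.split(",")[0]) for c in cells)) / len(cells)
cy = float(sum(int(c.split(",")[1]) for c in cells)) / len(cells)
print(str(round(cx,2)) + "," + str(round(cy,2)))   # "12.0,22.0"

Averaging in grid space keeps the per-user trail compact: one line per (user, slot) instead of one line per HTTP request.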
/onspark/statistic_user_hour_distance.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-

import sys
from operator import add
from pyspark import SparkConf
from pyspark import SparkContext

# user_hour_distance: per user and hour, the maximum pairwise great-circle
# distance (in metres) between the base stations the user connected to.
def rd(x):
    import math
    dist_max = 0
    if len(x) >= 2:
        # O(n^2) scan over all pairs of "lng,lat" points; n is small here
        # (base stations one user touches within a single hour).
        for i in range(0,len(x)-1):
            for j in range(i+1,len(x)):
                p1 = [float(k) for k in x[i].split(",")]
                p2 = [float(k) for k in x[j].split(",")]
                lng1,lat1,lng2,lat2 = p1[0],p1[1],p2[0],p2[1]
                # Haversine formula with a (rounded) Earth radius of 6,400 km.
                earth_radius = 6400*1000
                radlat1, radlat2 = lat1*math.pi/180.0, lat2*math.pi/180.0
                a, b = radlat1-radlat2, lng1*math.pi/180.0-lng2*math.pi/180.0
                s = 2*math.asin(math.sqrt(math.pow(math.sin(a/2),2)+math.cos(radlat1)*math.cos(radlat2)*math.pow(math.sin(b/2),2)))
                dist = int(abs(s*earth_radius))
                dist_max = dist if dist > dist_max else dist_max
    return dist_max

def extract(line):
    try:
        part = line.strip().replace('\"','').split(",")
        TTIME, LAC, CI, IMSI = part[1].split(" "), part[3], part[4], part[5]
        pt1, pt2, pt3 = TTIME[0].split("-"), TTIME[1].split("."), TTIME[2]
        year, month, day = int("20"+pt1[2]), {"AUG":8}[pt1[1]], int(pt1[0])
        hour, minute, second = int(pt2[0]), int(pt2[1]), int(pt2[2])
        hour = hour if hour != 12 else 0          # 12-hour -> 24-hour clock
        hour = hour if pt3 == "AM" else hour+12
        secs = hour*3600+minute*60+second
        key = LAC+" "+CI
        sl = secs//3600                           # hourly slot, 0-23
        if key in bss:
            lng, lat = bss[key]["lng"], bss[key]["lat"]
            if 120.02<=lng<120.48 and 30.15<=lat<30.42:
                # Keep the raw base-station coordinates; no gridding here.
                return (str(lng)+","+str(lat), sl, IMSI)
        return ("", -1, "")
    except Exception:
        return ("", -1, "")

if __name__ == "__main__":
    import fileinput
    bss = {}
    for line in fileinput.input("hz_base.txt"):
        part = line.strip().split(" ")
        num, lng, lat = part[1]+" "+part[2], float(part[3]), float(part[4])
        bss[num] = {"lng":lng, "lat":lat}
    fileinput.close()
    conf = SparkConf().setMaster('yarn-client') \
                      .setAppName('qiangsiwei') \
                      .set('spark.driver.maxResultSize', "8g")
    sc = SparkContext(conf = conf)
    filename = "0819"
    lines = sc.textFile("hdfs://namenode.omnilab.sjtu.edu.cn/user/qiangsiwei/hangzhou/original/{0}.csv".format(filename))
    # For each (IMSI, hour), collect the distinct base stations and take the
    # maximum pairwise distance as that hour's movement range.
    counts = lines.map(extract) \
        .filter(lambda x : x[0]!="" and x[1]!=-1 and x[2]!="") \
        .map(lambda x : ((x[2],x[1]),x[0])) \
        .distinct() \
        .groupByKey() \
        .map(lambda x : (x[0],rd(list(x[1])))) \
        .map(lambda x : str(x[0][0])+" "+str(x[0][1])+" "+str(x[1]))
    counts.saveAsTextFile("./hangzhou/CFF/{0}_user_hour_dist_bs.csv".format(filename))
--------------------------------------------------------------------------------
/基于移动网络流量日志的城市空间行为分析.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qiangsiwei/hangzhou_CCF/a423c78d9fc1a0c2aafda28ad3be9e165e90a02d/基于移动网络流量日志的城市空间行为分析.pdf
--------------------------------------------------------------------------------
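Returning to statistic_user_hour_distance.py above: rd() measures a user's hourly movement range as the largest pairwise great-circle distance among visited base stations. A standalone restatement of its haversine step, checked against the study-area diagonal (note the scripts use a rounded Earth radius of 6,400 km rather than the more common 6,371 km, so distances run about 0.5% long):

import math

def haversine_m(lng1, lat1, lng2, lat2, earth_radius=6400*1000):
    # Same formula and radius as rd() in statistic_user_hour_distance.py.
    rl1, rl2 = math.radians(lat1), math.radians(lat2)
    a, b = rl1 - rl2, math.radians(lng1) - math.radians(lng2)
    s = 2*math.asin(math.sqrt(math.sin(a/2)**2 +
                              math.cos(rl1)*math.cos(rl2)*math.sin(b/2)**2))
    return abs(s) * earth_radius

# Corner to corner of the 120.02-120.48 x 30.15-30.42 bounding box:
print(haversine_m(120.02, 30.15, 120.48, 30.42))   # ~53,600 m, i.e. ~54 km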