├── .ipynb_checkpoints ├── Computation_landscape_metrics-checkpoint.ipynb ├── Landuse_classification-Copy1-checkpoint.ipynb ├── Landuse_classification-Copy10-checkpoint.ipynb ├── Landuse_classification-Copy11-checkpoint.ipynb ├── Landuse_classification-Copy12-checkpoint.ipynb ├── Landuse_classification-Copy13-checkpoint.ipynb ├── Landuse_classification-Copy14-checkpoint.ipynb ├── Landuse_classification-Copy2-checkpoint.ipynb ├── Landuse_classification-Copy3-checkpoint.ipynb ├── Landuse_classification-Copy4-checkpoint.ipynb ├── Landuse_classification-Copy5-checkpoint.ipynb ├── Landuse_classification-Copy6-checkpoint.ipynb ├── Landuse_classification-Copy7-checkpoint.ipynb ├── Landuse_classification-Copy8-checkpoint.ipynb ├── Landuse_classification-Copy9-checkpoint.ipynb ├── Landuse_classification-checkpoint.ipynb ├── SOUKAINA_version_FINALE-checkpoint.ipynb └── rli_configfile_creation-checkpoint.ipynb ├── Computation_landscape_metrics.ipynb ├── LICENSE ├── README.md └── illustration ├── Ouaga_LC.jpg ├── Ouaga_Low_elevated_building_Patch_density.jpg ├── Ouaga_Shannon.jpg └── Ouaga_landuse_classif.jpg /.ipynb_checkpoints/SOUKAINA_version_FINALE-checkpoint.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "## Passage du Land cover au land use en utilisant des métriques paysagères" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "Ce code est implémenté sous python 2.7 , en utilisant GRASS 7.2" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | " Définition de l'environnement de travail pour Grass " 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 12, 29 | "metadata": {}, 30 | "outputs": [ 31 | { 32 | "name": "stdout", 33 | "output_type": "stream", 34 | "text": [ 35 | "Import des librairies\n", 36 | "Import des librairies achevé !\n" 37 | ] 38 | } 39 | ], 40 | "source": [ 41 | "# Import de librairies\n", 42 | "print('Import des librairies')\n", 43 | "import os\n", 44 | "import sys\n", 45 | "import shutil\n", 46 | "import csv\n", 47 | "import string\n", 48 | "import itertools\n", 49 | "import random\n", 50 | "print('Import des librairies achevé !')" 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "metadata": {}, 56 | "source": [ 57 | "Définition des variables d'environnement de grass " 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "execution_count": 13, 63 | "metadata": { 64 | "collapsed": true 65 | }, 66 | "outputs": [], 67 | "source": [ 68 | "# Définition des variables d'environnement de GRASS\n", 69 | "gisbase = os.environ['GISBASE'] = 'C:/Program Files/GRASS GIS 7.2.svn'\n", 70 | "os.environ['PATH'] = 'C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\lib;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\bin;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\extrabin' + os.pathsep + os.environ['PATH']\n", 71 | "os.environ['PATH'] = 'C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc\\\\python;C:\\\\Python27' + os.pathsep + os.environ['PATH']\n", 72 | "os.environ['PATH'] = 'C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\Python27;C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Roaming\\\\GRASS7\\\\addons\\\\scripts' + os.pathsep + os.environ['PATH']\n", 73 | "os.environ['PATH'] = 'C:\\\\Program Files\\\\Anaconda2\\\\lib\\\\site-packages' + os.pathsep + os.environ['PATH']\n", 74 | 
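Les chemins ci-dessus sont codés en dur pour un poste précis. À titre indicatif, une esquisse équivalente construisant ces chemins à partir de GISBASE (hypothèse : la même installation GRASS 7.2 que ci-dessus) :

# Esquisse indicative : construire le PATH à partir de GISBASE plutôt que de coder chaque chemin en dur
import os
gisbase = os.environ['GISBASE'] = 'C:/Program Files/GRASS GIS 7.2.svn'
for sous_dossier in ['lib', 'bin', 'extrabin', 'etc', os.path.join('etc', 'python')]:
    os.environ['PATH'] = os.path.join(gisbase, sous_dossier) + os.pathsep + os.environ['PATH']
os.environ['PYTHONPATH'] = os.path.join(gisbase, 'etc', 'python')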
"os.environ['PYTHONLIB'] = 'C:\\Python27'\n", 75 | "os.environ['PYTHONPATH'] = 'C:/Program Files/GRASS GIS 7.2.svn/etc/python'\n", 76 | "os.environ['GIS_LOCK'] = '$$'" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "metadata": {}, 82 | "source": [ 83 | " Définition de l'environnement GRASS-PYTHON " 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": 14, 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "# Définition de l'environnement GRASS-PYTHON\n", 93 | "sys.path.append(os.path.join(os.environ['GISBASE'],'etc','python'))" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": 15, 99 | "metadata": { 100 | "collapsed": true 101 | }, 102 | "outputs": [], 103 | "source": [ 104 | "# répertoire contenant le mapset\n", 105 | "gisdb = \"F:\\\\stagiaire\"" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "metadata": {}, 111 | "source": [ 112 | "## Spécification d'une location existante et d'un mapset" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "La projection de la location devra être celle d'une couche utilisée par la suite (celle des ilots urbains ou autre...)\n", 120 | "Il faudra donc cocheer : \"Read projection and datum terms from a georeferenced data file\" lors de sa creation:\n" 121 | ] 122 | }, 123 | { 124 | "cell_type": "markdown", 125 | "metadata": {}, 126 | "source": [ 127 | "![illustration : ](images\\createLocation.png )" 128 | ] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "execution_count": 16, 133 | "metadata": {}, 134 | "outputs": [], 135 | "source": [ 136 | "# specify (existing) location and mapset\n", 137 | "location = \"Glocation\"\n", 138 | "mapset = \"PERMANENT\"\n" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 17, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [ 147 | "# import GRASS Python bindings\n", 148 | "import grass.script as grass\n", 149 | "import grass.script.setup as gsetup\n", 150 | "import pandas as pd" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "metadata": {}, 156 | "source": [ 157 | "## Lancement de la session" 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "execution_count": 18, 163 | "metadata": {}, 164 | "outputs": [ 165 | { 166 | "data": { 167 | "text/plain": [ 168 | "'c:\\\\users\\\\stagia~1\\\\appdata\\\\local\\\\temp\\\\tmpot5h40'" 169 | ] 170 | }, 171 | "execution_count": 18, 172 | "metadata": {}, 173 | "output_type": "execute_result" 174 | } 175 | ], 176 | "source": [ 177 | "# launch session\n", 178 | "gsetup.init(gisbase,gisdb, location, mapset)" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "## Import des librairies" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": 19, 191 | "metadata": {}, 192 | "outputs": [ 193 | { 194 | "name": "stdout", 195 | "output_type": "stream", 196 | "text": [ 197 | "Import des librairies\n", 198 | "L import des librairies est achevé !\n" 199 | ] 200 | } 201 | ], 202 | "source": [ 203 | "print('Import des librairies')\n", 204 | "import grass.script as grass\n", 205 | "import time\n", 206 | "import pandas as pd\n", 207 | "import glob\n", 208 | "print('L import des librairies est achevé !')" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 20, 214 | "metadata": {}, 215 | "outputs": [ 216 | { 217 | "name": "stdout", 218 | "output_type": "stream", 219 | "text": [ 220 | "1470932917.53\n" 221 | ] 222 | } 223 | 
], 224 | "source": [ 225 | "begintime=time.time()\n", 226 | "print(begintime)" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "## Definition des fonctions" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [ 240 | "calculTemps calcule le temps de traitement et prend comme paramètre l'heure de début de traitement et une chaine de caractère qui correspond à une phrase à afficher expliquant le traitement effectué" 241 | ] 242 | }, 243 | { 244 | "cell_type": "code", 245 | "execution_count": 21, 246 | "metadata": { 247 | "collapsed": true 248 | }, 249 | "outputs": [], 250 | "source": [ 251 | "def calculTemps(tempbegintime, chaine):\n", 252 | " tempendtime=time.time()\n", 253 | " hours=int((tempendtime-tempbegintime)/3600)\n", 254 | " minutes=int(((tempendtime-tempbegintime)-(hours*3600))/60)\n", 255 | " seconds=round(((tempendtime-tempbegintime)-(minutes*60))%60,1)\n", 256 | " return chaine+str(hours)+\" heures, \"+str(minutes)+\" minutes et \"+str(seconds)+\" secondes\"" 257 | ] 258 | }, 259 | { 260 | "cell_type": "markdown", 261 | "metadata": {}, 262 | "source": [ 263 | "importVectorGrass importe un vecteur dans Grass et prend comme paramètre le lien vers le fichier shapefile et le nom du fichier de sortie. Si un fichier portant le même nom existe déjà sur GRASS, il sera écrasé." 264 | ] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "execution_count": 22, 269 | "metadata": {}, 270 | "outputs": [], 271 | "source": [ 272 | "def importVectorGrass(lienDuVecteur, nomFichierDeSortie):\n", 273 | " grass.run_command('v.in.ogr', overwrite=True, input=lienDuVecteur, output=nomFichierDeSortie)" 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "metadata": {}, 279 | "source": [ 280 | "extractIlots extrait une zone d'étude à partir d'un polygone délimitant la zone, et d'une couche vectorielle qui contient la zone à extraire. 
Etant donné que Grass crée la table attributaire de la nouvelle couche vectorielle en joignant les colonnes des deux couches en entrée et en ajoutant un suffixe au nom de chaque colonne, et comme toutes les colonnes ne sont pas utiles, certaines sont supprimées et les autres sont renommées afin d'en supprimer le suffixe ajouté par Grass." 281 | ] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "execution_count": 23, 286 | "metadata": { 287 | "collapsed": true 288 | }, 289 | "outputs": [], 290 | "source": [ 291 | "def extractIlots(LienVersPolygoneZE, NomCoucheIlotsUrbains):\n", 292 | "    ## Import du polygone servant à extraire la zone d'étude \n", 293 | "    importVectorGrass(LienVersPolygoneZE, \"ZE\")\n", 294 | "    \n", 295 | "    #extraction\n", 296 | "    grass.run_command('v.overlay', ainput='ZE' , binput=NomCoucheIlotsUrbains, output='ZE_ILOTS_URBAINS', operator='and' ,overwrite= True, snap=-1)\n", 297 | "    \n", 298 | "    ##suppression des colonnes inutiles et renommage des autres\n", 299 | "    grass.run_command('v.db.dropcolumn', map='ZE_ILOTS_URBAINS@PERMANENT',columns='b_cat')\n", 300 | "    colonne_ilots=grass.parse_command('db.columns',table='ZE_ILOTS_URBAINS_1')\n", 301 | "    liste_colonnes_ilots=[]\n", 302 | "    for elem in colonne_ilots:\n", 303 | "        liste_colonnes_ilots.append(str(elem))\n", 304 | "    for nom in liste_colonnes_ilots:\n", 305 | "        if nom.startswith('a_'):\n", 306 | "            grass.run_command('v.db.dropcolumn', map='ZE_ILOTS_URBAINS@PERMANENT',columns=nom)\n", 307 | "        elif nom.startswith('b_'):\n", 308 | "            grass.run_command('v.db.renamecolumn', map='ZE_ILOTS_URBAINS@PERMANENT',column=nom+','+nom[2:] )" 309 | ] 310 | }, 311 | { 312 | "cell_type": "markdown", 313 | "metadata": {}, 314 | "source": [ 315 | "nettoieDonnees ajoute une colonne \"aire\" à la table attributaire de la couche des ilots urbains et la remplit avec la surface de chaque ilot. Ensuite, tous les ilots pour lesquels Grass n'arrive pas à calculer l'aire (artefacts) ou ceux ayant une aire inférieure à 1 m² sont supprimés. ATTENTION : L'aire calculée dépend de la projection utilisée. Il faut alors que le système de coordonnées soit projeté et que l'unité utilisée soit le mètre." 316 | ] 317 | }, 318 | { 319 | "cell_type": "code", 320 | "execution_count": 24, 321 | "metadata": { 322 | "collapsed": true 323 | }, 324 | "outputs": [], 325 | "source": [ 326 | "def nettoieDonnees (NomCouche ):\n", 327 | "    grass.run_command('v.db.addcolumn',map= NomCouche, columns = 'aire double precision' )\n", 328 | "    grass.run_command('v.to.db', map=NomCouche+ '@PERMANENT', option='area', columns='aire')\n", 329 | "    #suppression des ilots dont l'aire est inférieure à 1 car ils ne sont pas assez grands\n", 330 | "    grass.run_command('db.execute', sql=\"DELETE FROM %s where %s\" % (NomCouche+'_1', \"aire<1\") )\n", 331 | "    # Suppression des artefacts (aire non calculable)\n", 332 | "    grass.run_command('db.execute', sql='DELETE FROM ' + NomCouche + '_1 where aire is Null')\n", 333 | "    " 334 | ] 335 | }, 336 | { 337 | "cell_type": "markdown", 338 | "metadata": {}, 339 | "source": [ 340 | "importRasterGrass importe un raster dans Grass et prend comme paramètre le lien vers le fichier et le nom du fichier de sortie. Si un fichier portant le même nom existe déjà sur GRASS, il sera écrasé."
341 | ] 342 | }, 343 | { 344 | "cell_type": "code", 345 | "execution_count": 25, 346 | "metadata": { 347 | "collapsed": true 348 | }, 349 | "outputs": [], 350 | "source": [ 351 | "def importRasterGrass(lienDuRaster, nomFichierDeSortie):\n", 352 | " grass.run_command('r.in.gdal', input=lienDuRaster, output=nomFichierDeSortie, overwrite=True, flags=\"o\")" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "metadata": {}, 358 | "source": [ 359 | "importRasterGrass extrait la zone d'étude à partir de la couche de land cover à partir du polygone pécedemment importé dans Grass('ZE'). Rennomme les colonnes attributaires de la couche extraite et en supprime quelques unes" 360 | ] 361 | }, 362 | { 363 | "cell_type": "code", 364 | "execution_count": 26, 365 | "metadata": { 366 | "collapsed": true 367 | }, 368 | "outputs": [], 369 | "source": [ 370 | "def extractLandCover():\n", 371 | " grass.run_command('v.overlay', ainput='LAND_COVER' , binput='ZE', output='ZE_LAND_COVER', operator='and', overwrite= True)\n", 372 | " #Renommage des colonnes et suppression des colonnes inutiles\n", 373 | " grass.run_command('v.db.dropcolumn', map='ZE_LAND_COVER@PERMANENT',columns='a_cat')\n", 374 | "\n", 375 | " colonne_cover=grass.parse_command('db.columns',table='ZE_LAND_COVER_1')\n", 376 | " liste_colonnes_cover=[]\n", 377 | " for elem in colonne_cover:\n", 378 | " liste_colonnes_cover.append(str(elem))\n", 379 | " for nom in liste_colonnes_cover:\n", 380 | " if nom[0] is'a' and nom[1] is'_':\n", 381 | " grass.run_command('v.db.renamecolumn', map='ZE_LAND_COVER@PERMANENT',column=nom+','+nom[2:len(nom)] )\n", 382 | " elif nom[0] is'b' and nom[1] is'_':\n", 383 | " grass.run_command('v.db.dropcolumn', map='ZE_LAND_COVER@PERMANENT',columns=nom)\n", 384 | "\n", 385 | "\n", 386 | " " 387 | ] 388 | }, 389 | { 390 | "cell_type": "markdown", 391 | "metadata": {}, 392 | "source": [ 393 | " listeClasseLC retourne la liste des classes de land cover à partir de la colonne répertoriant toutes les classes de land cover dans la couche vectorielle du land cover" 394 | ] 395 | }, 396 | { 397 | "cell_type": "code", 398 | "execution_count": 27, 399 | "metadata": { 400 | "collapsed": true 401 | }, 402 | "outputs": [], 403 | "source": [ 404 | "def listeClasseLC(nomCoucheEtudeLC, colonneLc):\n", 405 | " list_class=[]\n", 406 | "\n", 407 | " classes= grass.parse_command('v.db.select', map=nomCoucheEtudeLC, columns=colonneLc, flags='c')\n", 408 | "\n", 409 | " for elem in classes:\n", 410 | " ##print type(elem)\n", 411 | " list_class.append(str(elem))\n", 412 | " \n", 413 | " ##print type(list_class[0])\n", 414 | "\n", 415 | " return list_class" 416 | ] 417 | }, 418 | { 419 | "cell_type": "markdown", 420 | "metadata": {}, 421 | "source": [ 422 | "extractRaster fournit en sortie un raster 1-null pour chaque classe de land cover (soit 10 rasters pour notre cas d’étude). Etant donné une classe de land cover ‘LC’, les pixels du raster 1-null extrait, pour la classe ‘LC’ valent 1 s’ils se superposent à cette même classe dans la couche de Land cover, et valent null sinon. Pour ce faire, la fonction définit une région dont l'étendue correspond à la zone d'étude et dont la résolution correspont à 25 cm. (Sans cela, le raster résultat contient des pixels de 300 m ). Ensuite, La liste des classes est parcourue pour extraire les polygones correspondant à la classe courante (Cette opération engendre automatiquement la modification des noms des colonnes dans la couche résultante. 
C'est pour cette raison qu'un changement de nom des colonnes a été effectué après l'extraction). La couche résultante est alors transformée en raster dont les pixels valent 1 sur les zones correspondant à la classe courante et null ailleurs." 423 | ] 424 | }, 425 | { 426 | "cell_type": "code", 427 | "execution_count": 28, 428 | "metadata": {}, 429 | "outputs": [], 430 | "source": [ 431 | "def extractRaster(list_class):\n", 432 | "    \n", 433 | "    # définit une région dont l'étendue correspond à la zone d'étude et dont la résolution correspond à 25 cm\n", 434 | "    grass.run_command('g.region',vector=vecteurRegion,res=resolution)\n", 435 | "\n", 436 | "    for classe in list_class :\n", 437 | "        condition=\"lc_code='\"+ classe+\"'\"\n", 438 | "        outputvect=\"LC_\"+ classe\n", 439 | "        outputrast=outputvect+\"_raster\"\n", 440 | "        grass.run_command('v.extract', overwrite=True, input=nomCoucheEtudeLC,type=\"area\",where=condition, output=outputvect )\n", 441 | "        grass.run_command('v.db.update',map=outputvect , column='code_int',value=1)\n", 442 | "        \n", 443 | "        #conversion du fichier au format raster\n", 444 | "        grass.run_command('v.to.rast' , overwrite=True, input=outputvect, output=outputrast, use='attr',column='code_int', quiet=True) \n", 445 | "    " 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "calculMetriques calcule toutes les métriques de GRASS dont les noms sont contenus dans une liste passée en paramètre, et stocke les résultats dans des fichiers csv. Chaque fichier contient une métrique calculée pour chaque ilot de la zone étudiée, sur un raster 1-null représentant une classe de la liste listClass passée en paramètre.\n", 453 | "L'utilisateur devra créer manuellement sur GRASS les fichiers de configuration passés en paramètre à r.li.*, et ce pour chaque classe de land cover. Il devra IMPERATIVEMENT les nommer conf_nomDeLaClasse (préfixe attendu par la fonction calculMetriques ci-dessous). Si la zone d'étude est changée, il faudra en recréer de nouveaux. Ces fichiers de configuration sont créés à l'aide de la commande [g.gui.rlisetup](https://grass.osgeo.org/grass72/manuals/g.gui.rlisetup.html) (sous Grass 7.2).\n", 454 | "Les métriques sont calculées ci-dessous pour chaque raster 1-null. Le résultat de ce calcul correspond à un fichier csv qui est mis par défaut dans le répertoire suivant: C:\Users\Stagiaire\AppData\Roaming\GRASS7\r.li\output. Pour des raisons pratiques, le fichier csv généré est ensuite copié dans un dossier nommé \"output\". Ce dernier se trouve dans le dossier \"classification_metriques\" du répertoire du grassData. Finalement, le fichier est remis en forme, en supprimant les \"|\" et les \"RESULT\" \n" 455 | ] 456 | }, 457 | { 458 | "cell_type": "code", 459 | "execution_count": 29, 460 | "metadata": {}, 461 | "outputs": [], 462 | "source": [ 463 | "def calculMetriques(listClass,listeMetriques):\n", 464 | "    ''' dossierDestination= dossier dans lequel l'utilisateur veut copier les fichiers csv correspondant aux métriques calculées. 
si le dossier n'existe pas, il est crée \n", 465 | " listClass= liste des classes de land cover\n", 466 | " listeMetriques= liste des metriques de grass\n", 467 | " '''\n", 468 | " \n", 469 | " nomSession= os.environ[\"USERNAME\"]\n", 470 | " \n", 471 | " #Si le dossier de destination n'existe pas,il le crée\n", 472 | " if not os.path.exists(gisdb+'\\\\classification_metriques'):\n", 473 | " os.makedirs(gisdb+'\\\\classification_metriques')\n", 474 | " if not os.path.exists(gisdb+'\\\\classification_metriques\\\\output'):\n", 475 | " os.makedirs(gisdb+'\\\\classification_metriques\\\\output')\n", 476 | "\n", 477 | " index = { '|' : ' ','RESULT':'','\\n':''}\n", 478 | " \n", 479 | " #parcourt les classes\n", 480 | " for classe in listClass:\n", 481 | " #parcourt les métriques\n", 482 | " for metrique in listeMetriques:\n", 483 | " #définition du nom du fichier de sortie\n", 484 | " output=metrique+'_'+str(classe)+'.csv'\n", 485 | " grass.run_command('r.li.'+ metrique, input='LC_'+str(classe)+'_raster@PERMANENT' ,config=\"C:\\\\Users\\\\\"+nomSession+\"\\\\AppData\\\\Roaming\\\\GRASS7\\\\r.li\\\\conf_\"+str(classe), output=output ,overwrite=True)\n", 486 | " #copie du fichier crée par la commande GRASS dans le dossier de destination\n", 487 | " shutil.copy2('C:\\\\Users\\\\'+nomSession+'\\\\AppData\\\\Roaming\\\\GRASS7\\\\r.li\\\\output\\\\'+output , gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output)\n", 488 | " fichier=open(gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output, \"r\")\n", 489 | " contenu=fichier.read()\n", 490 | " #Supprime les \"|\" et les \"RESULT\"\n", 491 | " for cle in index: \n", 492 | " contenu=contenu.replace(cle, index[cle])\n", 493 | " fichier.close()\n", 494 | " \n", 495 | " #Met tous les mots du contenu nettoyé dans une liste\n", 496 | " elements = string.split(contenu)\n", 497 | " \n", 498 | " #écrase l'ancien fichier et y organise les informations \n", 499 | " fichier=open(gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output, 'w')\n", 500 | " with fichier as csvfile:\n", 501 | " spamwriter = csv.writer(csvfile, delimiter=';')\n", 502 | " fieldnames = ['cat', classe+'_'+metrique]\n", 503 | " writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n", 504 | " writer.writeheader()\n", 505 | " j=0\n", 506 | " while((2*j+1) < len(elements)):\n", 507 | " writer.writerow({'cat':float(elements[2*j]), classe+'_'+metrique:elements[2*j+1]})\n", 508 | " j=j+1\n", 509 | "\n", 510 | " fichier.close()" 511 | ] 512 | }, 513 | { 514 | "cell_type": "markdown", 515 | "metadata": {}, 516 | "source": [ 517 | "![illustration : ](images\\calculMetriques.png )" 518 | ] 519 | }, 520 | { 521 | "cell_type": "markdown", 522 | "metadata": {}, 523 | "source": [ 524 | "extractRasterAll extrait un raster dont chaque pixel vaut un nombre qui a été associé à la classe à laquelle appartient ce pixel.\n", 525 | "(le pixel vaut donc n s'il appartient à la classe n et il il vaut n+1 s'il appartient à la classe n+1...)" 526 | ] 527 | }, 528 | { 529 | "cell_type": "code", 530 | "execution_count": 30, 531 | "metadata": { 532 | "collapsed": true 533 | }, 534 | "outputs": [], 535 | "source": [ 536 | "def extractRasterAll():\n", 537 | " #définition de la région et de la résolution \n", 538 | " grass.run_command('g.region',vector=vecteurRegion,res=resolution)\n", 539 | " # Ajout d'une colonne \"code_int\" à la table attributaire du land cover\n", 540 | " grass.run_command('v.db.addcolumn',map= nomCoucheEtudeLC , columns = 'code_int INT' )\n", 541 | " j=1\n", 542 | " for classes 
in list_class:\n", 543 | " condt= \"lc_code='\"+ classes+\"'\"\n", 544 | " grass.run_command('v.db.update',map=nomCoucheEtudeLC ,column='code_int',value=j, where=condt)\n", 545 | " j+=1\n", 546 | " grass.run_command('g.region',vector=vecteurRegion,res=resolution)\n", 547 | " grass.run_command('v.to.rast' , overwrite=True, input=nomCoucheEtudeLC+'@PERMANENT', output='raster_all_classes', use='attr',column='code_int', quiet=True) " 548 | ] 549 | }, 550 | { 551 | "cell_type": "markdown", 552 | "metadata": {}, 553 | "source": [ 554 | "calculMetriquesAll calcule les métriques sur le raster dont la valeur des pixels est un entier représentant une classe de land cover. La démarche est la même que celle de calculMetriques sauf que là , il n'y a pas besoin de parcourir la liste des classes. Un autre fichier de configuration devra être cée manuellement pour cette fonction. Ce fichier de configuration est celui du raster complet (celui extrait par la fonction extractRasterAll() )" 555 | ] 556 | }, 557 | { 558 | "cell_type": "code", 559 | "execution_count": 31, 560 | "metadata": {}, 561 | "outputs": [], 562 | "source": [ 563 | "def calculMetriquesAll(listeMetriques ):\n", 564 | " ''' dossierDestination= dossier dans lequel l'utilisateur veut copier les fichiers csv correspondant aux métriques calculées. si le dossier n'existe pas, il est crée \n", 565 | " listClass= liste des classes de land cover\n", 566 | " listeMetriques= liste des metriques de grass\n", 567 | " nomSession= nom de la session windows courante'''\n", 568 | " \n", 569 | " nomSession = os.environ['USERNAME']\n", 570 | "\n", 571 | "\n", 572 | " #Si le dossier de destination n'existe pas,il le crée\n", 573 | " if not os.path.exists(gisdb+'\\\\classification_metriques'):\n", 574 | " os.makedirs(gisdb+'\\\\classification_metriques')\n", 575 | " if not os.path.exists(gisdb+'\\\\classification_metriques\\\\output'):\n", 576 | " os.makedirs(gisdb+'\\\\classification_metriques\\\\output')\n", 577 | "\n", 578 | " index = { '|' : ' ','RESULT':'','\\n':''}\n", 579 | " for metrique in listeMetriques:\n", 580 | " output='all_'+metrique+'.csv'\n", 581 | " \n", 582 | " #la métrique renyi exige un paramètre supplémentaire : alpha \n", 583 | " if metrique =='renyi':\n", 584 | " grass.run_command('r.li.'+ metrique, input='raster_all_classes@PERMANENT' ,config=\"C:\\\\Users\\\\\"+nomSession+\"\\\\AppData\\\\Roaming\\\\GRASS7\\\\r.li\\\\conf_all_classes\", alpha= 2, output=output ,overwrite=True)\n", 585 | " else: \n", 586 | " grass.run_command('r.li.'+ metrique, input='raster_all_classes@PERMANENT' ,config=\"C:\\\\Users\\\\\"+nomSession+\"\\\\AppData\\\\Roaming\\\\GRASS7\\\\r.li\\\\conf_all_classes\", output=output ,overwrite=True)\n", 587 | " #copie du fichier crée par la commande GRASS dans le dossier de destination\n", 588 | " shutil.copy2('C:\\\\Users\\\\'+nomSession+'\\\\AppData\\\\Roaming\\\\GRASS7\\\\r.li\\\\output\\\\'+output , gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output)\n", 589 | " fichier=open(gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output, \"r\")\n", 590 | " contenu=fichier.read()\n", 591 | " #remplace les | par ; et les RESULT par rien\n", 592 | " for cle in index: \n", 593 | " contenu=contenu.replace(cle, index[cle])\n", 594 | " fichier.close()\n", 595 | " \n", 596 | " #Met tous les mots du contenu nettoyé dans une liste\n", 597 | " elements = string.split(contenu)\n", 598 | " \n", 599 | " #écrase l'ancien fichier et y organise les informations\n", 600 | " 
fichier=open(gisdb+'\\\\classification_metriques\\\\output'+'\\\\'+output, 'w')\n", 601 | " with fichier as csvfile:\n", 602 | " spamwriter = csv.writer(csvfile, delimiter=';')\n", 603 | " fieldnames = ['cat', metrique]\n", 604 | " writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n", 605 | " writer.writeheader()\n", 606 | " j=0\n", 607 | " while((2*j+1) < len(elements)):\n", 608 | " writer.writerow({'cat':float(elements[2*j]), metrique:elements[2*j+1]})\n", 609 | " j=j+1\n", 610 | " \n", 611 | " fichier.close()" 612 | ] 613 | }, 614 | { 615 | "cell_type": "markdown", 616 | "metadata": {}, 617 | "source": [ 618 | "CreationCsvMetriques crée un fichier csv contenant deux colonnes extraites de la table attributaire de la couche des ilots urbains: cat (colonne créee automatiquement lors de l'import des données sur grass) et id_ilot (correspondant à l'identifiant de l'ilot). elle effectue ensuite la jointure entre ce fichier csv et tous les autres csv correpondants aux metriques calculées en utilisant la colonne 'cat' qui est la colonne commune entre ces tables.\n", 619 | "Une fois la jointure faite, le tableau est exporté sous format csv. Il est ensuite nettoyée en supprimant les lignes n'ayant pas d'id_ilots car celles ci correspondent aux lignes supprimées par la fonction nettoieDonnées. Puis, des fichiers csvt sont crées afin de déterminer le type des champs qui sont dans ce cas des Real" 620 | ] 621 | }, 622 | { 623 | "cell_type": "code", 624 | "execution_count": 32, 625 | "metadata": {}, 626 | "outputs": [], 627 | "source": [ 628 | "def CreationCsvMetriques( ):\n", 629 | " \n", 630 | " '''liste_chemins représente las listes des chemins vers les fichiers csv'''\n", 631 | " '''cette fonction effectue la jointure entre les differents fichiers csv et l'exporte sous forme d'un fichier csv .Ensuite, elle le nettoie en éliminant les artefacts'''\n", 632 | " \n", 633 | " liste_chemins=glob.glob(gisdb+'\\\\classification_metriques\\\\output\\\\*.csv')\n", 634 | " \n", 635 | " cheminIdCat=gisdb+'\\\\classification_metriques\\\\output\\\\id_cat'\n", 636 | " if not os.path.exists(cheminIdCat):\n", 637 | " os.makedirs(cheminIdCat)\n", 638 | " \n", 639 | " dossierDestinationCsv= gisdb+'\\\\classification_metriques\\\\metriquesIlots'\n", 640 | " if not os.path.exists(dossierDestinationCsv):\n", 641 | " os.makedirs(dossierDestinationCsv)\n", 642 | " \n", 643 | " grass.run_command('db.out.ogr', overwrite=True, input=nomCoucheEtudeIlots+'@PERMANENT' ,output=cheminIdCat +'\\\\id_cat.csv', format=\"CSV\")\n", 644 | " id_cat=pd.read_csv(cheminIdCat+'\\\\id_cat.csv', sep=',',header=0)\n", 645 | " id_cat.to_csv(path_or_buf=cheminIdCat+'\\\\id_cat.csv', columns=['cat','ID_ILOT'] ,sep=',', header=True, quoting=None, decimal='.', index=False)\n", 646 | " id_cat=pd.read_csv(cheminIdCat+'\\\\id_cat.csv', sep=',',header=0)\n", 647 | " id_cat.head()\n", 648 | " \n", 649 | " #jointure du fichier id_cat avec tous les autres fichiers contenant toutes les valeurs des métriques\n", 650 | " pandadataframe=pd.read_csv(cheminIdCat+'\\\\id_cat.csv', sep=',',header=0)\n", 651 | " for elem in liste_chemins:\n", 652 | " pdf=pd.read_csv(elem, sep=',',header=0)\n", 653 | " pandadataframe= pd.merge(pandadataframe, pdf, on='cat')\n", 654 | " \n", 655 | " \n", 656 | " #export en tant que fichier csv\n", 657 | " pandadataframe.to_csv(path_or_buf=dossierDestinationCsv+'\\\\metriquesParIlot.csv', sep=',', header=True, quoting=None, decimal='.', index=False, overwrite=True )\n", 658 | " \n", 659 | " #supprime les lignes n'ayant pas 
d'ilot, celles-ci peuvent résulter des ilots supprimés à cause des doublons générés après l'extraction de la zone d'étude\n", 660 | " with open(dossierDestinationCsv+'\\\\metriquesParIlot.csv', 'rb') as csvfile:\n", 661 | " new_rows_list = []\n", 662 | " spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')\n", 663 | " for row in spamreader:\n", 664 | " if row[1] != '':\n", 665 | " new_rows_list.append(row)\n", 666 | " csvfile.close()\n", 667 | " file2 = open(dossierDestinationCsv+'\\\\metriquesParIlot.csv', 'wb')\n", 668 | " writer = csv.writer(file2)\n", 669 | " writer.writerows(new_rows_list)\n", 670 | " file2.close()\n", 671 | " \n", 672 | " \n", 673 | " \n", 674 | " #Convertie les colonnes correspondant aux metriques à des réels en créant un fichier csvt\n", 675 | " csvFile=open(dossierDestinationCsv+'\\\\metriquesParIlot.csv', 'r')\n", 676 | " reader=csv.reader(csvFile, delimiter=',', quotechar='|')\n", 677 | " listeTest=list(reader)\n", 678 | " fichier=open(dossierDestinationCsv+'\\\\metriquesParIlot.csv'+'t', 'w')\n", 679 | " fichier.write('\"Real\"')\n", 680 | " c=len(listeTest[0])-1\n", 681 | " fichier.write(',\"Real\"'*c)\n", 682 | " fichier.close()\n", 683 | " csvFile.close()\n", 684 | " \n", 685 | " " 686 | ] 687 | }, 688 | { 689 | "cell_type": "markdown", 690 | "metadata": {}, 691 | "source": [ 692 | "entrValidClasse prend en paramètre une couche vectorielle correspondant à la vérité terrain , le nom de la colonne répertoriant la classe de land use de chaque ilot et un suffixe facultatif ajouté aux noms des fichiers de sortie qui sont nommés \"entrainement\" , \"validation\" et \"aclasser\" .Ces fichiers de sortie sont des fichiers csv correspondant aux ilots de validation , d'entrainement et des ilots à classer. " 693 | ] 694 | }, 695 | { 696 | "cell_type": "code", 697 | "execution_count": 33, 698 | "metadata": {}, 699 | "outputs": [], 700 | "source": [ 701 | "def entrValidClasser(lienVersShpEV, nomColonneCLU, nomColonneIdIlot,suffixe=''):\n", 702 | " '''\n", 703 | " Cette fonction prend en entrée :\n", 704 | " \n", 705 | " lienVersShpEV: lien vers le fichier shp contenant les ilots d'entrainement/validation\n", 706 | " lienVersCsvMetriques : lien vers le fichier csv contenant toutes les métriques calculées. 
'nomColonneCLU' devra etre la 1 ere colonne de ce fichier\n", 707 | " nomColonneCLU : nom de la colonne correspondant à la classe de land use dans la table attributaire du vecteur contenant les ilots d'entrainement/validation \n", 708 | " nomcolonneIdIlot : nom de la colonne correspondant à l'id des ilots, dans le fichier csv où sont repertoriés toutes les métriques et dans la tables attributaire du vecteur entrainement/validation\n", 709 | " \n", 710 | " \n", 711 | " '''\n", 712 | " lienVersCsvMetriques= gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\metriquesParIlot.csv'\n", 713 | " \n", 714 | " #import des données d'entrainement/validation\n", 715 | " importVectorGrass(lienVersShpEV, 'entrainement_validation')\n", 716 | " \n", 717 | " #export de la table attributaire du vecteur entrainement/validation sous la forme d'un fichier csv \n", 718 | " grass.run_command('db.out.ogr', overwrite=True, input=\"entrainement_validation\",output=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement_validation_table.csv', format=\"CSV\")\n", 719 | " \n", 720 | " #extraction des deux colonnes 'ID_ILOT' et 'class' sous forme d'un fichier csv nommé 'entrainement_validation_table'\n", 721 | " entrainement_validation_table=pd.read_csv(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement_validation_table.csv', sep=',',header=0)\n", 722 | " entrainement_validation_table.to_csv(path_or_buf=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement_validation_table.csv', columns=[nomColonneIdIlot,nomColonneCLU] ,sep=',', header=True, quoting=None, decimal='.', index=False)\n", 723 | " \n", 724 | " #jointure du fichier 'entrainement_validation_table' et de celui contenant toutes les métriques calculées en utilisant la colonne répertoriant l'id de l'ilot (nomColonneIdIlot)\n", 725 | " pdf=pd.read_csv(lienVersCsvMetriques , sep=',',header=0)\n", 726 | " entrainement_validation_table=pd.read_csv(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement_validation_table.csv', sep=',',header=0)\n", 727 | " pandadataframe= pd.merge( pdf,entrainement_validation_table, on=nomColonneIdIlot )\n", 728 | " pandadataframe.to_csv(path_or_buf=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\metriques_entrainement.csv' ,sep=',', header=True, quoting=None, decimal='.', index=False)\n", 729 | " \n", 730 | " #initialisation des listes\n", 731 | " list_entr_valid=[]\n", 732 | " list_validation_temp=[]\n", 733 | " list_validation=[]\n", 734 | " list_entrainement=[]\n", 735 | " list_cl_use=[]\n", 736 | " \n", 737 | " #Ajout de toutes les classes de land use à la liste \"list_cl\"\n", 738 | " classe_use=grass.parse_command('v.db.select', map='entrainement_validation', columns=nomColonneCLU, flags='c')\n", 739 | " for elem in classe_use :\n", 740 | " ##print type(elem)\n", 741 | " c= str(elem)\n", 742 | " list_cl_use.append(str(elem))\n", 743 | "\n", 744 | " #Ajout de tous les id des ilots d'entrainement/validation à la liste \"list-entr-valid\"\n", 745 | " id_ilots_entrainement_validation= grass.parse_command('v.db.select', map='entrainement_validation', columns=nomColonneIdIlot, flags='c')\n", 746 | " for elem in id_ilots_entrainement_validation :\n", 747 | " ##print type(elem)\n", 748 | " list_entr_valid.append(str(float(elem)))\n", 749 | "\n", 750 | " #Parcourt des classes de land use . 
\n", 751 | " for element in list_cl_use:\n", 752 | " #Choix aléatoire de 6 ids d'ilots par classe de land use , et ajout de ces ids à la liste \"list_validation\"\n", 753 | " validation=grass.parse_command('v.db.select', map='entrainement_validation',where=nomColonneCLU+'='+element , columns=nomColonneIdIlot, flags='c')\n", 754 | " list_of_random_items = random.sample(validation, 6)\n", 755 | " list_validation_temp=list_validation_temp+list_of_random_items\n", 756 | " for val in list_validation_temp:\n", 757 | " list_validation.append(str(float(val)))\n", 758 | "\n", 759 | " #selection des ids de la liste d'entrainement/validation n'appartenant pas à la liste des ids d'ilots de validation et ajout de ceux ci à list-entrainement\n", 760 | " for eleme in list_entr_valid:\n", 761 | " #print eleme\n", 762 | " if not(eleme in list_validation):\n", 763 | " list_entrainement.append(str(eleme))\n", 764 | " \n", 765 | " liste_lignes_entrainement = []\n", 766 | " liste_lignes_validation=[]\n", 767 | " liste_lignes_classer=[]\n", 768 | " \n", 769 | " #selection des lignes correspondant aux ilots d'entrainement à l'aide de l'id se trouvant dans la deuxième colonne du fichier csv metriques_entrainement\n", 770 | " # et ajout de celles-ci à \"liste_lignes-entrainement\"\n", 771 | " with open(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\metriques_entrainement.csv', 'rb') as csvfile:\n", 772 | " spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')\n", 773 | " print spamreader\n", 774 | " k=0\n", 775 | " for row in spamreader:\n", 776 | " if (row[1]+'.0' in list_entrainement) or k==0:\n", 777 | " k=k+1\n", 778 | " liste_lignes_entrainement.append(row)\n", 779 | " csvfile.close()\n", 780 | " \n", 781 | " #remplace les trous par des zeros\n", 782 | " for elem in liste_lignes_entrainement:\n", 783 | " for index, item in enumerate(elem):\n", 784 | " if item == '':\n", 785 | " elem[index] = '0'\n", 786 | "\n", 787 | " #selection des lignes correspondant aux ilots de validation à l'aide de l'id se trouvant dans la deuxième colonne du fichier csv metriques_entrainement\n", 788 | " # et ajout de celles-ci à \"liste_lignes_validation\" \n", 789 | " with open(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\metriques_entrainement.csv', 'rb') as csvfile:\n", 790 | " spamreader1 = csv.reader(csvfile, delimiter=',', quotechar='|')\n", 791 | " k=0\n", 792 | " for row in spamreader1:\n", 793 | " if (row[1]+'.0' in list_validation) or (k==0) :\n", 794 | " k=k+1\n", 795 | " liste_lignes_validation.append(row) \n", 796 | " csvfile.close()\n", 797 | " \n", 798 | " #remplace les trous par des zeros dans liste_lignes_validation\n", 799 | " for elem in liste_lignes_validation:\n", 800 | " for index, item in enumerate(elem):\n", 801 | " if item == '':\n", 802 | " elem[index] = '0'\n", 803 | " \n", 804 | " \n", 805 | " #Ajout de toutes les lignes du fichier csv crée par la fonction creationCsvMetriques à la liste \"liste_lignes_classer\"\n", 806 | " with open(lienVersCsvMetriques, 'rb') as csvfile:\n", 807 | " sreader = csv.reader(csvfile, delimiter=',', quotechar='|')\n", 808 | " l=0\n", 809 | " for row in sreader:\n", 810 | " liste_lignes_classer.append(row)\n", 811 | " l=l+1\n", 812 | " csvfile.close()\n", 813 | "\n", 814 | " \n", 815 | " #remplace les trous par des zeros dans liste_lignes_classer\n", 816 | " for elem in liste_lignes_classer:\n", 817 | " for index, item in enumerate(elem):\n", 818 | " if item == '':\n", 819 | " elem[index] = '0'\n", 820 | " \n", 821 | " \n", 822 | " #Création des 
fichiers csv à partir des listes de lignes précédemment définies\n", 823 | " \n", 824 | " entrainement= open(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement'+suffixe+'.csv', 'wb')\n", 825 | " writer = csv.writer(entrainement)\n", 826 | " writer.writerows(liste_lignes_entrainement)\n", 827 | " entrainement.close()\n", 828 | "\n", 829 | " validation= open(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\validation'+suffixe+'.csv', 'wb')\n", 830 | " writer = csv.writer(validation)\n", 831 | " writer.writerows(liste_lignes_validation)\n", 832 | " validation.close()\n", 833 | "\n", 834 | " aclasser = open(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\ilots_classer'+suffixe+'.csv', 'wb')\n", 835 | " writer = csv.writer(aclasser)\n", 836 | " writer.writerows(liste_lignes_classer)\n", 837 | " aclasser.close()\n", 838 | " \n", 839 | " #Suppression de la colonne \"ID_ILOT\" de chaque fichier \n", 840 | " \n", 841 | " entrainement =pd.read_csv(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement'+suffixe+'.csv',sep=',',header=0)\n", 842 | " del entrainement['ID_ILOT']\n", 843 | " entrainement.to_csv(path_or_buf=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement'+suffixe+'.csv', sep=',', header=True, quoting=None, decimal='.', index=False)\n", 844 | " \n", 845 | " validation =pd.read_csv(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\validation'+suffixe+'.csv',sep=',',header=0)\n", 846 | " del validation['ID_ILOT']\n", 847 | " validation.to_csv(path_or_buf=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\validation'+suffixe+'.csv',sep=',', header=True, quoting=None, decimal='.', index=False)\n", 848 | " \n", 849 | " aclasser=pd.read_csv(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\ilots_classer'+suffixe+'.csv',sep=',',header=0)\n", 850 | " del aclasser['ID_ILOT']\n", 851 | " aclasser.to_csv(path_or_buf=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\ilots_classer'+suffixe+'.csv', sep=',', header=True, quoting=None, decimal='.', index=False)\n", 852 | "\n", 853 | " #Suppression des éléments n'ayant plus aucune utilité\n", 854 | " os.remove(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\entrainement_validation_table.csv')\n", 855 | " os.remove(gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\metriques_entrainement.csv')\n", 856 | " grass.run_command('g.remove', type='vector', name='entrainement_validation@PERMANENT', flags='f')\n" 857 | ] 858 | }, 859 | { 860 | "cell_type": "markdown", 861 | "metadata": {}, 862 | "source": [ 863 | "classifier prend en entrée un paramètre facultatif qui correspond à un suffixe ajouté au nom des fichiers de sortie. Ce paramètre, s'il est fourni, devra être le même que celui fourni dans toutes les fonctions de ce script qui acceptent un paramètre nommé \"suffixe\". 
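Le tirage aléatoire de 6 ilots de validation par classe effectué ci-dessus par entrValidClasser peut se résumer ainsi avec pandas (esquisse indicative, non utilisée par le script ; hypothèse : un DataFrame contenant au moins les colonnes ID_ILOT et class) :

# Esquisse indicative : équivalent pandas du découpage entrainement/validation
# (6 ilots de validation tirés au hasard par classe de land use)
import pandas as pd

def decoupe_entrainement_validation(df, colonne_classe='class', n_validation=6, graine=None):
    # tire au plus n_validation ilots par classe pour la validation
    validation = df.groupby(colonne_classe, group_keys=False).apply(
        lambda groupe: groupe.sample(n=min(n_validation, len(groupe)), random_state=graine))
    # le reste sert à l'entrainement
    entrainement = df.drop(validation.index)
    return entrainement, validation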
Cette fonction produit un csv contenant 3 colonnes : id, rf (classification issue du random forest) et \"vote_smv\"( celle-ci n'a pas de grand intérêt dans ce cas car la classe du vote majoritaire correspond à celle du rf puisque cette dernière est la seule calculée.)" 864 | ] 865 | }, 866 | { 867 | "cell_type": "code", 868 | "execution_count": 34, 869 | "metadata": {}, 870 | "outputs": [], 871 | "source": [ 872 | "def classifier(suffixe=''):\n", 873 | " gisdb2=gisdb.replace('\\\\','//')\n", 874 | " dossierResultats=gisdb2+'//classification_metriques//results' \n", 875 | "\n", 876 | " if not os.path.exists(dossierResultats):\n", 877 | " os.makedirs(dossierResultats) \n", 878 | " \n", 879 | " grass.run_command('v.class.mlR', flags=\"fi\", overwrite=True, \n", 880 | " separator=\"comma\",\n", 881 | " segments_file=gisdb2+'//classification_metriques//metriquesIlots//ilots_classer'+suffixe+'.csv', \n", 882 | " training_file=gisdb2+'//classification_metriques//metriquesIlots//entrainement'+suffixe+'.csv', \n", 883 | " train_class_column=\"class\",\n", 884 | " output_class_column=\"vote\", \n", 885 | " output_prob_column=\"prob\", \n", 886 | " classifiers=\"rf\", \n", 887 | " folds=\"5\", \n", 888 | " partitions=\"10\", \n", 889 | " tunelength=\"10\", \n", 890 | " weighting_modes=\"smv\", \n", 891 | " weighting_metric=\"accuracy\", \n", 892 | " classification_results=dossierResultats+'//results'+suffixe+'.csv', \n", 893 | " accuracy_file=dossierResultats+'//accuracy'+suffixe+'.csv', \n", 894 | " model_details=dossierResultats+'//details'+suffixe+'.txt', \n", 895 | " bw_plot_file=dossierResultats+'//box_whisker'+suffixe+'.png',\n", 896 | " r_script_file=dossierResultats+'//Rscript_mlR'+suffixe+'.R', \n", 897 | " processes=\"3\")\n", 898 | " " 899 | ] 900 | }, 901 | { 902 | "cell_type": "markdown", 903 | "metadata": {}, 904 | "source": [ 905 | "valider prend en entrée un paramètre facultatif qui correspond à un suffixe ajouté au nom du fichier de sortie. Ce paramètre, s'il est fourni, devra être le même que celui fourni dans toutes les fonctions de ce script qui acceptent un paramètre nommé \"suffixe\". 
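La fonction classifier ci-dessus repose sur l'addon v.class.mlR (qui nécessite par ailleurs R et le paquet caret). S'il n'est pas encore installé dans la session GRASS, il peut l'être ainsi (esquisse indicative) :

# Esquisse indicative : installation de l'addon v.class.mlR s'il est absent
grass.run_command('g.extension', extension='v.class.mlR')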
Cette fonction produit un csv où est repertorié la matrice de confusion, l'exactitude, le kappa et le pourcentage d'omission , de comission mais aussi tous les kappas par classe" 906 | ] 907 | }, 908 | { 909 | "cell_type": "code", 910 | "execution_count": 35, 911 | "metadata": { 912 | "collapsed": true 913 | }, 914 | "outputs": [], 915 | "source": [ 916 | "def valider ( suffixe=''):\n", 917 | " cheminVersCsvClassif=gisdb+'classification_metriques\\\\results\\\\results'+suffixe+'.csv'\n", 918 | " cheminVersCsvValidation=gisdb+'\\\\classification_metriques\\\\metriquesIlots\\\\validation'+suffixe+'.csv'\n", 919 | " lienFichierSortie=gisdb+'\\\\classification_metriques\\\\validation\\\\EMK'+suffixe+'.csv'\n", 920 | " \n", 921 | " #définition de la région\n", 922 | " grass.run_command('g.region',vector=vecteurRegion,res=resolution)\n", 923 | " \n", 924 | " #encodage des valeurs en real - RESULTS\n", 925 | " csvFile=open(cheminVersCsvClassif, 'r')\n", 926 | " reader=csv.reader(csvFile, delimiter=',', quotechar='|')\n", 927 | " liste_results=list(reader)\n", 928 | " fichier=open(cheminVersCsvClassif+'t', 'w')\n", 929 | " fichier.write('\"real\"')\n", 930 | " #print len(listeTest)\n", 931 | " c=len(liste_results[0])-1\n", 932 | " fichier.write(',\"real\"'*c)\n", 933 | " fichier.close()\n", 934 | " csvFile.close()\n", 935 | " \n", 936 | " \n", 937 | " #encodage des valeurs en real - VALIDATION\n", 938 | " csvFile=open(cheminVersCsvValidation, 'r')\n", 939 | " reader=csv.reader(csvFile, delimiter=',', quotechar='|')\n", 940 | " liste_results=list(reader)\n", 941 | " fichier=open(cheminVersCsvValidation+'t', 'w')\n", 942 | " fichier.write('\"real\"')\n", 943 | " #print len(listeTest)\n", 944 | " c=len(liste_results[0])-1\n", 945 | " fichier.write(',\"real\"'*c)\n", 946 | " fichier.close()\n", 947 | " csvFile.close()\n", 948 | " \n", 949 | " #import du fichier dans Grass et jointure de celle-ci avec la couche des ilots urbains - RESULTS\n", 950 | " grass.run_command('db.in.ogr', overwrite=True, input=cheminVersCsvClassif, output='results')\n", 951 | " grass.run_command('v.db.join', map=nomCoucheEtudeIlots+'@PERMANENT', column='cat', other_table='results', other_column='id')\n", 952 | " \n", 953 | " \n", 954 | " #import du fichier dans Grass et jointure de celle-ci avec la couche des ilots urbains - VALIDATION\n", 955 | " grass.run_command('db.in.ogr', overwrite=True, input=cheminVersCsvValidation, output='validation')\n", 956 | " grass.run_command('v.db.join', map=nomCoucheEtudeIlots+'@PERMANENT', column='cat', other_table='validation', other_column='cat_')\n", 957 | " \n", 958 | " #transform polygone en centroïde + presence table attributaire \n", 959 | " grass.run_command('v.extract', overwrite=True, input=nomCoucheEtudeIlots+'@PERMANENT', type=\"centroid\", where=\"class IS NOT NULL\", output=\"VALIDATION_ILOTS_URBAINS_CENTRO\")\n", 960 | " grass.run_command('v.type' ,overwrite=True, input='VALIDATION_ILOTS_URBAINS_CENTRO@PERMANENT', output='VALIDATION_ILOTS_URBAINS_POINT@PERMANENT', from_type='centroid', to_type='point')\n", 961 | " \n", 962 | " #creation d'un raster dont les valeurs des pixels valent la classe issue du random forest - RESULTS\n", 963 | " grass.run_command('v.to.rast' , overwrite=True, input='VALIDATION_ILOTS_URBAINS_POINT@PERMANENT', output='classif', use='attr', column='rf', quiet=True)\n", 964 | " #remplace les pixels de valeur zero par des nulls\n", 965 | " grass.run_command('r.null', map='classif', setnull=0)\n", 966 | " \n", 967 | " #creation d'un raster dont les valeurs 
des pixels valent la classe identifiée manuellement (Ground truth) - VALIDATION\n", 968 | " grass.run_command('v.to.rast' , overwrite=True, input='VALIDATION_ILOTS_URBAINS_POINT@PERMANENT', output='validation', use='attr', column='class', quiet=True)\n", 969 | " \n", 970 | " #remplace les pixels de valeur zero par des nulls\n", 971 | " grass.run_command('g.remove', type='vector', name='VALIDATION_ILOTS_URBAINS_CENTRO,VALIDATION_ILOTS_URBAINS_POINT', flags='f')\n", 972 | " grass.run_command('r.null', map='validation', setnull=0)\n", 973 | " \n", 974 | " \n", 975 | " if not os.path.exists(gisdb+'\\\\classification_metriques\\\\validation'):\n", 976 | " os.makedirs(gisdb+'\\\\classification_metriques\\\\validation') \n", 977 | " grass.run_command('r.kappa', classification='classif@PERMANENT', output=lienFichierSortie, overwrite= True, reference='validation@PERMANENT')\n" 978 | ] 979 | }, 980 | { 981 | "cell_type": "markdown", 982 | "metadata": {}, 983 | "source": [ 984 | "## Définition des paramètres" 985 | ] 986 | }, 987 | { 988 | "cell_type": "code", 989 | "execution_count": 36, 990 | "metadata": {}, 991 | "outputs": [], 992 | "source": [ 993 | "#couche vectorielle correspondant aux ilots urbains\n", 994 | "lienVersShapefileIlots=\"F:\\\\stagiaire\\\\donnees\\\\ilot-urb-31370.shp\"\n", 995 | "#couche vectorielle correspondant à l'occupation du sol\n", 996 | "lienVersShapefileLC=\"F:\\\\stagiaire\\\\donnees\\\\LC_LIEGE_31370_recod.shp\"\n", 997 | "colonneLC='lc_code'\n", 998 | "resolution='0.25'\n", 999 | "#Si la classification se fait sur une zone des couches importées , il faudrait définir la variable suivante:\n", 1000 | "#polygone délimitant la zone d'étude\n", 1001 | "LienVersPolygoneZE=\"F:\\\\stagiaire\\\\donnees\\\\ZE_LC_large.shp\"\n", 1002 | "\n", 1003 | "\n", 1004 | "#entrainement\n", 1005 | "#Chemin de la couche vectorielle correspondant aux ilots de la vérité terrain\n", 1006 | "lienVersShpEV='F:\\\\stagiaire\\\\entrainement_test\\\\ilots_entrainement_4.shp' \n", 1007 | "#Nom de la colonne attributaire de la couche vectorielle correspondant aux ilots urbains qui répertorie les id d'ilots\n", 1008 | "nomColonneIdIlot='ID_ILOT'\n", 1009 | "#Nom de la colonne attributaire de la couche vectorielle correspondant aux ilots de la vérité terrainr qui répertorie les classes de land use\n", 1010 | "nomColonneCLU='class'\n", 1011 | "\n", 1012 | "#Les valeurs de ces variables dépendent de si une zone d'étude a été extraite ou pas\n", 1013 | "#si oui, ces variables devront valoir :\n", 1014 | "nomCoucheEtudeIlots='ZE_ILOTS_URBAINS' \n", 1015 | "vecteurRegion='ZE'\n", 1016 | "nomCoucheEtudeLC='ZE_LAND_COVER'\n", 1017 | "\n", 1018 | "#Sinon, il faudra comenter les trois lignes ci dessus et décommenter les trois lignes ci-dessous :\n", 1019 | "#nomCoucheEtudeIlots='ILOTS_URBAINS' \n", 1020 | "#VecteurRegion='ILOTS_URBAINS'\n", 1021 | "#nomCoucheEtudeLC='LAND_COVER'\n", 1022 | "\n", 1023 | "\n" 1024 | ] 1025 | }, 1026 | { 1027 | "cell_type": "markdown", 1028 | "metadata": {}, 1029 | "source": [ 1030 | "## Import des données brutes \n" 1031 | ] 1032 | }, 1033 | { 1034 | "cell_type": "markdown", 1035 | "metadata": {}, 1036 | "source": [ 1037 | "*Land cover - Ilots urbains " 1038 | ] 1039 | }, 1040 | { 1041 | "cell_type": "markdown", 1042 | "metadata": {}, 1043 | "source": [ 1044 | "Import des ilots urbains" 1045 | ] 1046 | }, 1047 | { 1048 | "cell_type": "code", 1049 | "execution_count": 60, 1050 | "metadata": {}, 1051 | "outputs": [ 1052 | { 1053 | "name": "stdout", 1054 | "output_type": 
"stream", 1055 | "text": [ 1056 | "Import des ilots urbains Mon Aug 08 09:45:16 2016\n", 1057 | "L'import des données a pris : 0 heures, 0 minutes et 4.7 secondes\n" 1058 | ] 1059 | } 1060 | ], 1061 | "source": [ 1062 | "print ('Import des ilots urbains '+ time.ctime())\n", 1063 | "tempbegintime=time.time()\n", 1064 | "\n", 1065 | "\n", 1066 | "## Import des ilots urbains\n", 1067 | "importVectorGrass(lienVersShapefileIlots, \"ILOTS_URBAINS\")\n", 1068 | "\n", 1069 | "\n", 1070 | "\n", 1071 | "## Compute processing time and print it\n", 1072 | "print calculTemps(tempbegintime,\"L'import des données a pris : \")" 1073 | ] 1074 | }, 1075 | { 1076 | "cell_type": "markdown", 1077 | "metadata": {}, 1078 | "source": [ 1079 | "Import du land cover\n" 1080 | ] 1081 | }, 1082 | { 1083 | "cell_type": "code", 1084 | "execution_count": 115, 1085 | "metadata": {}, 1086 | "outputs": [ 1087 | { 1088 | "name": "stdout", 1089 | "output_type": "stream", 1090 | "text": [ 1091 | "Import du Land cover le Mon Aug 08 16:16:25 2016\n", 1092 | "L'import du land cover a pris : 0 heures, 14 minutes et 34.6 secondes\n" 1093 | ] 1094 | } 1095 | ], 1096 | "source": [ 1097 | "print ('Import du Land cover le '+ time.ctime())\n", 1098 | "tempbegintime=time.time()\n", 1099 | "\n", 1100 | "## Import du land cover\n", 1101 | "importVectorGrass(lienVersShapefileLC, \"LAND_COVER\")\n", 1102 | "\n", 1103 | "## Compute processing time and print it\n", 1104 | "print calculTemps(tempbegintime,\"L'import du land cover a pris : \")" 1105 | ] 1106 | }, 1107 | { 1108 | "cell_type": "markdown", 1109 | "metadata": {}, 1110 | "source": [ 1111 | "## Extraction de la zone d'étude à partir des couches importées\n" 1112 | ] 1113 | }, 1114 | { 1115 | "cell_type": "markdown", 1116 | "metadata": {}, 1117 | "source": [ 1118 | "Extraction de la zone d'étude à partir de la couche des ilots urbains.
\n", 1119 | "Cette dernière ne devrait pas contenir de polygones multi-parties car sinon, ils seront divisées et auront le même id , ce qui engendrera des doublons d'id. Il faudra alors les supprimer manuellement dans le fichier csv crée par la fonction creationCsvMetriques(). (Ceci peut se faire en ouvrant le fichier csv à l'aide d'excel puis en selectionnant la colonne correspondant aux ids des ilots , ensuite aller dans le menu \"données\" et cliquer sur \"supprimer les doublons\")" 1120 | ] 1121 | }, 1122 | { 1123 | "cell_type": "code", 1124 | "execution_count": 108, 1125 | "metadata": { 1126 | "scrolled": true 1127 | }, 1128 | "outputs": [ 1129 | { 1130 | "name": "stdout", 1131 | "output_type": "stream", 1132 | "text": [ 1133 | "Extraction de la zone d'étude à partir de la couche des ilots urbains le : Mon Aug 08 15:28:15 2016\n" 1134 | ] 1135 | }, 1136 | { 1137 | "data": { 1138 | "text/plain": [ 1139 | "\"L'extraction de la zone d'\\xe9tude \\xe0 partir de la couche des ilots urbains a pris :0 heures, 0 minutes et 12.0 secondes\"" 1140 | ] 1141 | }, 1142 | "execution_count": 108, 1143 | "metadata": {}, 1144 | "output_type": "execute_result" 1145 | } 1146 | ], 1147 | "source": [ 1148 | "tempbegintime=time.time()\n", 1149 | "print (\"Extraction de la zone d'étude à partir de la couche des ilots urbains le : \" + time.ctime())\n", 1150 | "\n", 1151 | "extractIlots(LienVersPolygoneZE, 'ILOTS_URBAINS')\n", 1152 | "\n", 1153 | "# Calcul du temps de traitement\n", 1154 | "calculTemps(tempbegintime, \"L'extraction de la zone d'étude à partir de la couche des ilots urbains a pris :\")" 1155 | ] 1156 | }, 1157 | { 1158 | "cell_type": "code", 1159 | "execution_count": 109, 1160 | "metadata": { 1161 | "collapsed": true 1162 | }, 1163 | "outputs": [], 1164 | "source": [ 1165 | "\n", 1166 | "nettoieDonnees (nomCoucheEtudeIlots )" 1167 | ] 1168 | }, 1169 | { 1170 | "cell_type": "markdown", 1171 | "metadata": {}, 1172 | "source": [ 1173 | "![illustration : ](images\\extraction_zone_etude_ilots_urbains.png )" 1174 | ] 1175 | }, 1176 | { 1177 | "cell_type": "markdown", 1178 | "metadata": {}, 1179 | "source": [ 1180 | "Extraction de la zone d'étude à partir de la couche d'occupation du sol. 
" 1181 | ] 1182 | }, 1183 | { 1184 | "cell_type": "code", 1185 | "execution_count": 28, 1186 | "metadata": {}, 1187 | "outputs": [ 1188 | { 1189 | "name": "stdout", 1190 | "output_type": "stream", 1191 | "text": [ 1192 | "Extraction de la zone d'étude à partir de la couche de land cover le : Tue Aug 09 13:25:27 2016\n" 1193 | ] 1194 | }, 1195 | { 1196 | "data": { 1197 | "text/plain": [ 1198 | "\"L'extraction de la zone d'\\xc3\\xa9tude \\xc3\\xa0 partir de la couche de land cover a pris :0 heures, 8 minutes et 23.2 secondes\"" 1199 | ] 1200 | }, 1201 | "execution_count": 28, 1202 | "metadata": {}, 1203 | "output_type": "execute_result" 1204 | } 1205 | ], 1206 | "source": [ 1207 | "tempbegintime=time.time()\n", 1208 | "print (\"Extraction de la zone d'étude à partir de la couche de land cover le : \" + time.ctime())\n", 1209 | "\n", 1210 | "extractLandCover()\n", 1211 | "\n", 1212 | "# Calcul du temps de traitement\n", 1213 | "calculTemps(tempbegintime, \"L'extraction de la zone d'étude à partir de la couche de land cover a pris :\")" 1214 | ] 1215 | }, 1216 | { 1217 | "cell_type": "markdown", 1218 | "metadata": {}, 1219 | "source": [ 1220 | "![illustration : ](images\\extraction_zone_etude_land_cover.png )" 1221 | ] 1222 | }, 1223 | { 1224 | "cell_type": "code", 1225 | "execution_count": 37, 1226 | "metadata": {}, 1227 | "outputs": [ 1228 | { 1229 | "name": "stdout", 1230 | "output_type": "stream", 1231 | "text": [ 1232 | "['strate_herba', 'bat_haut', 'strate_arbu', 'strate_arbo', 'bat_bas_petit', 'surf_artif_sol_nu', 'bat_bas_large', 'res_ferrov', 'eau', 'res_rout']\n" 1233 | ] 1234 | } 1235 | ], 1236 | "source": [ 1237 | "list_class=listeClasseLC(nomCoucheEtudeLC,colonneLC)\n", 1238 | "print list_class" 1239 | ] 1240 | }, 1241 | { 1242 | "cell_type": "code", 1243 | "execution_count": 29, 1244 | "metadata": { 1245 | "collapsed": true 1246 | }, 1247 | "outputs": [], 1248 | "source": [ 1249 | "\n", 1250 | "liste_metriques=['edgedensity','mps','padrange','padsd','padcv','patchdensity','patchnum','dominance','shannon','simpson','richness','shape','pielou','renyi']" 1251 | ] 1252 | }, 1253 | { 1254 | "cell_type": "code", 1255 | "execution_count": 61, 1256 | "metadata": {}, 1257 | "outputs": [], 1258 | "source": [ 1259 | "extractRasterAll()" 1260 | ] 1261 | }, 1262 | { 1263 | "cell_type": "markdown", 1264 | "metadata": {}, 1265 | "source": [ 1266 | "Il faudrait créer le fichier de configuration manuellement avant de faire tourner calculMetriqueAll()" 1267 | ] 1268 | }, 1269 | { 1270 | "cell_type": "code", 1271 | "execution_count": null, 1272 | "metadata": {}, 1273 | "outputs": [ 1274 | { 1275 | "name": "stdout", 1276 | "output_type": "stream", 1277 | "text": [ 1278 | "Calcul des métriques sur le raster Fri Aug 05 11:18:30 2016\n" 1279 | ] 1280 | } 1281 | ], 1282 | "source": [ 1283 | "print ('Calcul des métriques sur le raster '+ time.ctime())\n", 1284 | "tempbegintime=time.time()\n", 1285 | "\n", 1286 | "calculMetriquesAll( liste_metriques)\n", 1287 | "\n", 1288 | "## Compute processing time and print it\n", 1289 | "print calculTemps(tempbegintime,\"Le calcul des métriques sur le raster a pris: \")" 1290 | ] 1291 | }, 1292 | { 1293 | "cell_type": "markdown", 1294 | "metadata": { 1295 | "collapsed": true 1296 | }, 1297 | "source": [ 1298 | "## Extraction d'un raster par classe" 1299 | ] 1300 | }, 1301 | { 1302 | "cell_type": "code", 1303 | "execution_count": 62, 1304 | "metadata": {}, 1305 | "outputs": [], 1306 | "source": [ 1307 | "extractRaster(list_class)\n" 1308 | ] 1309 | }, 1310 | { 1311 | 
"cell_type": "markdown", 1312 | "metadata": {}, 1313 | "source": [ 1314 | "![raster correspondant à la classe de land cover : ](images\\raster-1-null.png \"Raster 1-null\")" 1315 | ] 1316 | }, 1317 | { 1318 | "cell_type": "markdown", 1319 | "metadata": {}, 1320 | "source": [ 1321 | "## Calcul des métriques" 1322 | ] 1323 | }, 1324 | { 1325 | "cell_type": "markdown", 1326 | "metadata": {}, 1327 | "source": [ 1328 | "Les fichiers de configuration doivent être crées à la main sur GRASS pour chaque raster de land cover" 1329 | ] 1330 | }, 1331 | { 1332 | "cell_type": "markdown", 1333 | "metadata": {}, 1334 | "source": [ 1335 | "L'utilisateur devra créer manuellement sur grass les fichiers de configuration passés en paramètre à [r.li.*](https://grass.osgeo.org/grass64/manuals/r.li.html). Et ce, pour chaque raster de land cover. \n", 1336 | "Ces fichiers de configuration sont crées à l'aide de la commande [g.gui.rlisetup](https://grass.osgeo.org/grass72/manuals/g.gui.rlisetup.html) (sous Grass 7.2). Les fichiers de configuration devront être nommés cf_[nomClasseDansList_class] (ex : cf_bat_bas_large) et devront être crés avant de lancer le calcul des métriques." 1337 | ] 1338 | }, 1339 | { 1340 | "cell_type": "markdown", 1341 | "metadata": {}, 1342 | "source": [ 1343 | "![illustration :](images\\configfile.png)\n", 1344 | "\n", 1345 | "![illustration :](images\\configfile2.png)" 1346 | ] 1347 | }, 1348 | { 1349 | "cell_type": "code", 1350 | "execution_count": 34, 1351 | "metadata": { 1352 | "collapsed": true 1353 | }, 1354 | "outputs": [], 1355 | "source": [ 1356 | "liste_metriques_par_classe=['edgedensity','mps','padrange','padsd','padcv','patchdensity','patchnum','shape']" 1357 | ] 1358 | }, 1359 | { 1360 | "cell_type": "code", 1361 | "execution_count": 35, 1362 | "metadata": { 1363 | "scrolled": true 1364 | }, 1365 | "outputs": [ 1366 | { 1367 | "name": "stdout", 1368 | "output_type": "stream", 1369 | "text": [ 1370 | "Calcul des métriques le Sat Aug 06 11:57:31 2016\n", 1371 | "1\n", 1372 | "1\n", 1373 | "1\n", 1374 | "1\n", 1375 | "1\n", 1376 | "1\n", 1377 | "1\n", 1378 | "1\n", 1379 | "1\n", 1380 | "1\n" 1381 | ] 1382 | }, 1383 | { 1384 | "data": { 1385 | "text/plain": [ 1386 | "'Le calcul des m\\xc3\\xa9triques a pris : 5 heures, 56 minutes et 51.6 secondes'" 1387 | ] 1388 | }, 1389 | "execution_count": 35, 1390 | "metadata": {}, 1391 | "output_type": "execute_result" 1392 | } 1393 | ], 1394 | "source": [ 1395 | "print ('Calcul des métriques le '+ time.ctime())\n", 1396 | "tempbegintime=time.time()\n", 1397 | "\n", 1398 | "\n", 1399 | "calculMetriques( list_class ,liste_metriques_par_classe )\n", 1400 | "\n", 1401 | " \n", 1402 | " \n", 1403 | "## Compute processing time and print it\n", 1404 | "calculTemps(tempbegintime, \"Le calcul des métriques a pris : \")" 1405 | ] 1406 | }, 1407 | { 1408 | "cell_type": "code", 1409 | "execution_count": 63, 1410 | "metadata": { 1411 | "scrolled": false 1412 | }, 1413 | "outputs": [], 1414 | "source": [ 1415 | "\n", 1416 | "CreationCsvMetriques()\n" 1417 | ] 1418 | }, 1419 | { 1420 | "cell_type": "code", 1421 | "execution_count": 64, 1422 | "metadata": { 1423 | "scrolled": true 1424 | }, 1425 | "outputs": [ 1426 | { 1427 | "name": "stdout", 1428 | "output_type": "stream", 1429 | "text": [ 1430 | "<_csv.reader object at 0x0000000008DD5B88>\n" 1431 | ] 1432 | } 1433 | ], 1434 | "source": [ 1435 | "\n", 1436 | "entrValidClasser(lienVersShpEV, nomColonneCLU, nomColonneIdIlot)" 1437 | ] 1438 | }, 1439 | { 1440 | "cell_type": "markdown", 1441 | "metadata": 
{}, 1442 | "source": [ 1443 | "## Définition des variables d'environnement de R" 1444 | ] 1445 | }, 1446 | { 1447 | "cell_type": "markdown", 1448 | "metadata": {}, 1449 | "source": [ 1450 | "**Ne pas oublier de bien définir .libPaths('C:\\\\R_LIBS_USER\\\\win-library\\\\3.3') directement dans R**" 1451 | ] 1452 | }, 1453 | { 1454 | "cell_type": "code", 1455 | "execution_count": 65, 1456 | "metadata": { 1457 | "collapsed": true 1458 | }, 1459 | "outputs": [], 1460 | "source": [ 1461 | "os.environ['PATH'] = 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\bin\\\\x64' + os.pathsep + os.environ['PATH']" 1462 | ] 1463 | }, 1464 | { 1465 | "cell_type": "code", 1466 | "execution_count": 66, 1467 | "metadata": { 1468 | "collapsed": true 1469 | }, 1470 | "outputs": [], 1471 | "source": [ 1472 | "## Set R stat environment variables\n", 1473 | "os.environ['R_HOME'] = 'C:\\\\Program Files\\\\R\\\\R-2.15.2'\n", 1474 | "os.environ['R_ENVIRON'] = 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\etc\\\\x64'\n", 1475 | "os.environ['R_DOC_DIR'] = 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\doc'\n", 1476 | "os.environ['R_LIBS'] = 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\library'\n", 1477 | "os.environ['R_LIBS_USER'] = 'C:\\\\Program Files\\\\RStudio\\\\R\\\\library'" 1478 | ] 1479 | }, 1480 | { 1481 | "cell_type": "code", 1482 | "execution_count": 67, 1483 | "metadata": { 1484 | "scrolled": true 1485 | }, 1486 | "outputs": [ 1487 | { 1488 | "data": { 1489 | "text/plain": [ 1490 | "{'AGSDESKTOPJAVA': 'C:\\\\Program Files (x86)\\\\ArcGIS\\\\Desktop10.3\\\\',\n", 1491 | " 'ALLUSERSPROFILE': 'C:\\\\ProgramData',\n", 1492 | " 'APPDATA': 'C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Roaming',\n", 1493 | " 'ASL.LOG': 'Destination=file',\n", 1494 | " 'CLICOLOR': '1',\n", 1495 | " 'COMMONPROGRAMFILES': 'C:\\\\Program Files\\\\Common Files',\n", 1496 | " 'COMMONPROGRAMFILES(X86)': 'C:\\\\Program Files (x86)\\\\Common Files',\n", 1497 | " 'COMMONPROGRAMW6432': 'C:\\\\Program Files\\\\Common Files',\n", 1498 | " 'COMPUTERNAME': 'GRAZEO-1',\n", 1499 | " 'COMSPEC': 'C:\\\\Windows\\\\system32\\\\cmd.exe',\n", 1500 | " 'FP_NO_HOST_CHECK': 'NO',\n", 1501 | " 'GISBASE': 'C:/Program Files/GRASS GIS 7.2.svn',\n", 1502 | " 'GISRC': 'c:\\\\users\\\\stagia~1\\\\appdata\\\\local\\\\temp\\\\tmpslmuy5',\n", 1503 | " 'GIS_LOCK': '10308',\n", 1504 | " 'GIT_PAGER': 'cat',\n", 1505 | " 'GPU_MAX_ALLOC_PERCENT': '75',\n", 1506 | " 'GRASS_ADDON_BASE': 'C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Roaming\\\\GRASS7\\\\addons',\n", 1507 | " 'GRASS_PYTHON': 'python.exe',\n", 1508 | " 'HOMEDRIVE': 'C:',\n", 1509 | " 'HOMEPATH': '\\\\Users\\\\Stagiaire',\n", 1510 | " 'IPY_INTERRUPT_EVENT': '864',\n", 1511 | " 'JPY_INTERRUPT_EVENT': '864',\n", 1512 | " 'JPY_PARENT_PID': '1000',\n", 1513 | " 'LOCALAPPDATA': 'C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Local',\n", 1514 | " 'LOGONSERVER': '\\\\\\\\GRAZEO-1',\n", 1515 | " 'NUMBER_OF_PROCESSORS': '8',\n", 1516 | " 'OS': 'Windows_NT',\n", 1517 | " 'PAGER': 'cat',\n", 1518 | " 'PATH': 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\bin\\\\x64;C:\\\\Program Files\\\\Anaconda2\\\\lib\\\\site-packages;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\Python27;C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Roaming\\\\GRASS7\\\\addons\\\\scripts;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc\\\\python;C:\\\\Python27;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\lib;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\bin;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\extrabin;C:\\\\Program 
Files\\\\Anaconda2\\\\lib\\\\site-packages;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\Python27;C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Roaming\\\\GRASS7\\\\addons\\\\scripts;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\etc\\\\python;C:\\\\Python27;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\lib;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\bin;C:\\\\Program Files\\\\GRASS GIS 7.2.svn\\\\extrabin;F:\\\\ANACONDA\\\\Library\\\\bin;F:\\\\ANACONDA\\\\Library\\\\bin;F:\\\\ANACONDA\\\\Library\\\\bin;C:\\\\Python27;C:\\\\PCI Geomatics\\\\Geomatica 2015\\\\exe;C:\\\\ProgramData\\\\Oracle\\\\Java\\\\javapath;C:\\\\Program Files (x86)\\\\Intel\\\\iCLS Client\\\\;C:\\\\Program Files\\\\Intel\\\\iCLS Client\\\\;C:\\\\Windows\\\\system32;C:\\\\Windows;C:\\\\Windows\\\\System32\\\\Wbem;C:\\\\Windows\\\\System32\\\\WindowsPowerShell\\\\v1.0\\\\;C:\\\\Program Files (x86)\\\\Intel\\\\OpenCL SDK\\\\2.0\\\\bin\\\\x86;C:\\\\Program Files (x86)\\\\Intel\\\\OpenCL SDK\\\\2.0\\\\bin\\\\x64;C:\\\\Program Files\\\\Intel\\\\Intel(R) Management Engine Components\\\\DAL;C:\\\\Program Files\\\\Intel\\\\Intel(R) Management Engine Components\\\\IPT;C:\\\\Program Files (x86)\\\\Intel\\\\Intel(R) Management Engine Components\\\\DAL;C:\\\\Program Files (x86)\\\\Intel\\\\Intel(R) Management Engine Components\\\\IPT;C:\\\\Program Files\\\\MiKTeX 2.9\\\\miktex\\\\bin\\\\x64\\\\;C:\\\\Program Files\\\\R\\\\R-2.14.2\\\\bin;C:\\\\Python26\\\\ArcGIS10.0;C:\\\\OSGeo4W\\\\bin;E:\\\\Software\\\\lastools\\\\bin;C:\\\\Program Files (x86)\\\\Druide\\\\Antidote 7\\\\Programmes32;C:\\\\Program Files (x86)\\\\Druide\\\\Antidote 7\\\\Programmes64;C:\\\\Program Files (x86)\\\\QuickTime\\\\QTSystem\\\\;F:\\\\ANACONDA;F:\\\\ANACONDA\\\\Scripts;F:\\\\ANACONDA\\\\Library\\\\bin;C:\\\\Users\\\\Stagiaire\\\\AppData\\\\Local\\\\Pandoc\\\\;C:/Program Files/GRASS GIS 7.2.svn\\\\bin;C:/Program Files/GRASS GIS 7.2.svn\\\\scripts;C:/Program Files/GRASS GIS 7.2.svn\\\\extrabin;C:/Program Files/GRASS GIS 7.2.svn\\\\lib;C:/Program Files/GRASS GIS 7.2.svn\\\\bin;C:/Program Files/GRASS GIS 7.2.svn\\\\scripts;C:/Program Files/GRASS GIS 7.2.svn\\\\extrabin;C:/Program Files/GRASS GIS 7.2.svn\\\\lib',\n", 1519 | " 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC',\n", 1520 | " 'PROCESSOR_ARCHITECTURE': 'AMD64',\n", 1521 | " 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 58 Stepping 9, GenuineIntel',\n", 1522 | " 'PROCESSOR_LEVEL': '6',\n", 1523 | " 'PROCESSOR_REVISION': '3a09',\n", 1524 | " 'PROGRAMDATA': 'C:\\\\ProgramData',\n", 1525 | " 'PROGRAMFILES': 'C:\\\\Program Files',\n", 1526 | " 'PROGRAMFILES(X86)': 'C:\\\\Program Files (x86)',\n", 1527 | " 'PROGRAMW6432': 'C:\\\\Program Files',\n", 1528 | " 'PROMPT': '$P$G',\n", 1529 | " 'PSMODULEPATH': 'C:\\\\Windows\\\\system32\\\\WindowsPowerShell\\\\v1.0\\\\Modules\\\\',\n", 1530 | " 'PUBLIC': 'C:\\\\Users\\\\Public',\n", 1531 | " 'PYTHONLIB': 'C:\\\\Python27',\n", 1532 | " 'PYTHONPATH': 'C:/Program Files/GRASS GIS 7.2.svn\\\\etc\\\\python;C:/Program Files/GRASS GIS 7.2.svn/etc/python',\n", 1533 | " 'R_DOC_DIR': 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\doc',\n", 1534 | " 'R_ENVIRON': 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\etc\\\\x64',\n", 1535 | " 'R_HOME': 'C:\\\\Program Files\\\\R\\\\R-2.15.2',\n", 1536 | " 'R_LIBS': 'C:\\\\Program Files\\\\R\\\\R-2.15.2\\\\library',\n", 1537 | " 'R_LIBS_USER': 'C:\\\\Program Files\\\\RStudio\\\\R\\\\library',\n", 1538 | " 'SESSIONNAME': 'Console',\n", 1539 | " 'SYSTEMDRIVE': 'C:',\n", 1540 | " 'SYSTEMROOT': 
'C:\\\\Windows',\n", 1541 | " 'TEMP': 'C:\\\\Users\\\\STAGIA~1\\\\AppData\\\\Local\\\\Temp',\n", 1542 | " 'TERM': 'xterm-color',\n", 1543 | " 'TERRAGO_COMMON_DIR': 'C:\\\\Program Files (x86)\\\\Common Files\\\\TerraGo',\n", 1544 | " 'TMP': 'C:\\\\Users\\\\STAGIA~1\\\\AppData\\\\Local\\\\Temp',\n", 1545 | " 'USERDOMAIN': 'GRAZEO-1',\n", 1546 | " 'USERNAME': 'Stagiaire',\n", 1547 | " 'USERPROFILE': 'C:\\\\Users\\\\Stagiaire',\n", 1548 | " 'WINDIR': 'C:\\\\Windows'}" 1549 | ] 1550 | }, 1551 | "execution_count": 67, 1552 | "metadata": {}, 1553 | "output_type": "execute_result" 1554 | } 1555 | ], 1556 | "source": [ 1557 | "env" 1558 | ] 1559 | }, 1560 | { 1561 | "cell_type": "code", 1562 | "execution_count": 99, 1563 | "metadata": { 1564 | "collapsed": true 1565 | }, 1566 | "outputs": [], 1567 | "source": [ 1568 | "classifier()" 1569 | ] 1570 | }, 1571 | { 1572 | "cell_type": "markdown", 1573 | "metadata": {}, 1574 | "source": [ 1575 | "## VALIDATION" 1576 | ] 1577 | }, 1578 | { 1579 | "cell_type": "code", 1580 | "execution_count": 94, 1581 | "metadata": { 1582 | "scrolled": true 1583 | }, 1584 | "outputs": [ 1585 | { 1586 | "name": "stdout", 1587 | "output_type": "stream", 1588 | "text": [ 1589 | "Validation : Mon Aug 08 10:31:03 2016\n", 1590 | "La validation a pris : 0 heures, 5 minutes et 17.0 secondes\n" 1591 | ] 1592 | } 1593 | ], 1594 | "source": [ 1595 | "print ('Validation : '+ time.ctime())\n", 1596 | "tempbegintime=time.time()\n", 1597 | "\n", 1598 | "valider ()\n", 1599 | "\n", 1600 | "\n", 1601 | "print calculTemps(tempbegintime,\"La validation a pris : \")" 1602 | ] 1603 | } 1604 | ], 1605 | "metadata": { 1606 | "kernelspec": { 1607 | "display_name": "Python 2", 1608 | "language": "python", 1609 | "name": "python2" 1610 | }, 1611 | "language_info": { 1612 | "codemirror_mode": { 1613 | "name": "ipython", 1614 | "version": 2 1615 | }, 1616 | "file_extension": ".py", 1617 | "mimetype": "text/x-python", 1618 | "name": "python", 1619 | "nbconvert_exporter": "python", 1620 | "pygments_lexer": "ipython2", 1621 | "version": "2.7.12" 1622 | } 1623 | }, 1624 | "nbformat": 4, 1625 | "nbformat_minor": 1 1626 | } 1627 | -------------------------------------------------------------------------------- /.ipynb_checkpoints/rli_configfile_creation-checkpoint.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "
Define working environment
" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "The following cells are used to: \n", 15 | "- Import needed libraries\n", 16 | "- Set the environment variables for Python, Anaconda, GRASS GIS and R statistical computing \n", 17 | "- Define the [\"GRASSDATA\" folder](https://grass.osgeo.org/grass73/manuals/helptext.html), the name of \"location\" and \"mapset\" where you will to work." 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "metadata": {}, 23 | "source": [ 24 | "**Import libraries**" 25 | ] 26 | }, 27 | { 28 | "cell_type": "raw", 29 | "metadata": { 30 | "collapsed": true 31 | }, 32 | "source": [ 33 | "## Import libraries needed for setting parameters of operating system \n", 34 | "import os\n", 35 | "import sys\n", 36 | "\n", 37 | "## Import library for temporary files creation \n", 38 | "import tempfile \n", 39 | "\n", 40 | "## Import Pandas library\n", 41 | "import pandas as pd\n", 42 | "\n", 43 | "## Import Numpy library\n", 44 | "import numpy\n", 45 | "\n", 46 | "## Import Psycopg2 library (interection with postgres database)\n", 47 | "import psycopg2 as pg\n", 48 | "\n", 49 | "# Import Math library (usefull for rounding number, e.g.)\n", 50 | "import math\n", 51 | "\n", 52 | "## Import Subprocess + subprocess.call\n", 53 | "import subprocess\n", 54 | "from subprocess import call, Popen, PIPE, STDOUT" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 1, 60 | "metadata": { 61 | "collapsed": true 62 | }, 63 | "outputs": [], 64 | "source": [ 65 | "## Import libraries needed for setting parameters of operating system \n", 66 | "import os\n", 67 | "import sys" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "
Environment variables when working on Linux Mint
" 75 | ] 76 | }, 77 | { 78 | "cell_type": "markdown", 79 | "metadata": {}, 80 | "source": [ 81 | "**Set 'Python' and 'GRASS GIS' environment variables**" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "Here, we set [the environment variables allowing to use of GRASS GIS](https://grass.osgeo.org/grass64/manuals/variables.html) inside this Jupyter notebook. Please change the directory path according to your own system configuration." 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": 2, 94 | "metadata": { 95 | "collapsed": true 96 | }, 97 | "outputs": [], 98 | "source": [ 99 | "### Define GRASS GIS environment variables for LINUX UBUNTU Mint 18.1 (Serena)\n", 100 | "# Check is environmental variables exists and create them (empty) if not exists.\n", 101 | "if not 'PYTHONPATH' in os.environ:\n", 102 | " os.environ['PYTHONPATH']=''\n", 103 | "if not 'LD_LIBRARY_PATH' in os.environ:\n", 104 | " os.environ['LD_LIBRARY_PATH']=''\n", 105 | "# Set environmental variables\n", 106 | "os.environ['GISBASE'] = '/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu'\n", 107 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'bin')\n", 108 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'script')\n", 109 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')\n", 110 | "#os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')\n", 111 | "os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')\n", 112 | "os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass')\n", 113 | "os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass','script')\n", 114 | "os.environ['PYTHONLIB'] = '/usr/lib/python2.7'\n", 115 | "os.environ['LD_LIBRARY_PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')\n", 116 | "os.environ['GIS_LOCK'] = '$$'\n", 117 | "os.environ['GISRC'] = os.path.join(os.environ['HOME'],'.grass7','rc')\n", 118 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')\n", 119 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','bin')\n", 120 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')\n", 121 | "os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','scripts')\n", 122 | "\n", 123 | "## Define GRASS-Python environment\n", 124 | "sys.path.append(os.path.join(os.environ['GISBASE'],'etc','python'))" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "metadata": { 130 | "collapsed": true 131 | }, 132 | "source": [ 133 | "**Import GRASS Python packages**" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 143, 139 | "metadata": { 140 | "collapsed": true 141 | }, 142 | "outputs": [], 143 | "source": [ 144 | "## Import libraries needed to launch GRASS GIS in the jupyter notebook\n", 145 | "import grass.script.setup as gsetup\n", 146 | "\n", 147 | "## Import libraries needed to call GRASS using Python\n", 148 | "import grass.script as gscript\n", 149 | "from grass.script import core as grass" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**" 157 | ] 158 | }, 159 | 
{ 160 | "cell_type": "markdown", 161 | "metadata": { 162 | "collapsed": true 163 | }, 164 | "source": [ 165 | "**Display current environment variables of your computer**" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": 4, 171 | "metadata": { 172 | "scrolled": true 173 | }, 174 | "outputs": [ 175 | { 176 | "name": "stdout", 177 | "output_type": "stream", 178 | "text": [ 179 | "MDMSESSION = mate \t\n", 180 | "MANDATORY_PATH = /usr/share/gconf/mate.mandatory.path \t\n", 181 | "MATE_DESKTOP_SESSION_ID = this-is-deprecated \t\n", 182 | "LESSOPEN = | /usr/bin/lesspipe %s \t\n", 183 | "MDM_LANG = fr_BE.UTF-8 \t\n", 184 | "LOGNAME = tais \t\n", 185 | "USER = tais \t\n", 186 | "HOME = /home/tais \t\n", 187 | "XDG_VTNR = 8 \t\n", 188 | "PATH = /usr/local/bin:/home/tais/BIN:/home/tais/bin:/home/tais/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/bin:/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/script:/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/lib:/home/tais/.grass7/addons:/home/tais/.grass7/addons/bin:/home/tais/.grass7/addons:/home/tais/.grass7/addons/scripts \t\n", 189 | "CLICOLOR = 1 \t\n", 190 | "DISPLAY = :0.0 \t\n", 191 | "SSH_AGENT_PID = 2066 \t\n", 192 | "LANG = fr_BE.UTF-8 \t\n", 193 | "TERM = xterm-color \t\n", 194 | "SHELL = /bin/bash \t\n", 195 | "GIS_LOCK = $$ \t\n", 196 | "XAUTHORITY = /home/tais/.Xauthority \t\n", 197 | "SESSION_MANAGER = local/tais-HP-Z620-Workstation:@/tmp/.ICE-unix/1995,unix/tais-HP-Z620-Workstation:/tmp/.ICE-unix/1995 \t\n", 198 | "SHLVL = 1 \t\n", 199 | "QT_LINUX_ACCESSIBILITY_ALWAYS_ON = 1 \t\n", 200 | "INSIDE_CAJA_PYTHON = \t\n", 201 | "QT_ACCESSIBILITY = 1 \t\n", 202 | "LD_LIBRARY_PATH = :/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/lib \t\n", 203 | "COMPIZ_CONFIG_PROFILE = mate \t\n", 204 | "WINDOWPATH = 8 \t\n", 205 | "GTK_OVERLAY_SCROLLING = 0 \t\n", 206 | "PYTHONPATH = :/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/etc/python:/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/etc/python/grass:/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu/etc/python/grass/script \t\n", 207 | "GISBASE = /home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu \t\n", 208 | "CLUTTER_BACKEND = x11 \t\n", 209 | "USERNAME = tais \t\n", 210 | "XDG_SESSION_DESKTOP = mate \t\n", 211 | "GDM_XSERVER_LOCATION = local \t\n", 212 | "XDG_RUNTIME_DIR = /run/user/1000 \t\n", 213 | "JPY_PARENT_PID = 25828 \t\n", 214 | "QT_STYLE_OVERRIDE = gtk \t\n", 215 | "SSH_AUTH_SOCK = /run/user/1000/keyring/ssh \t\n", 216 | "VTE_VERSION = 4205 \t\n", 217 | "GDMSESSION = mate \t\n", 218 | "GISRC = /home/tais/.grass7/rc \t\n", 219 | "GIT_PAGER = cat \t\n", 220 | "XDG_CONFIG_DIRS = /etc/xdg/xdg-mate:/etc/xdg \t\n", 221 | "XDG_CURRENT_DESKTOP = MATE \t\n", 222 | "XDG_SESSION_ID = c1 \t\n", 223 | "DBUS_SESSION_BUS_ADDRESS = unix:abstract=/tmp/dbus-HQg01KxGx1,guid=5adb60f2fb9cb695876d89a05a25171e \t\n", 224 | "_ = /usr/local/bin/jupyter \t\n", 225 | "XDG_SESSION_COOKIE = 8441891e86e24d76b9616edf516d5734-1512380190.44057-252399727 \t\n", 226 | "DESKTOP_SESSION = mate \t\n", 227 | "WINDOWID = 85983238 \t\n", 228 | "LESSCLOSE = /usr/bin/lesspipe %s %s \t\n", 229 | "DEFAULTS_PATH = /usr/share/gconf/mate.default.path \t\n", 230 | "MPLBACKEND = module://ipykernel.pylab.backend_inline \t\n", 231 | "MDM_XSERVER_LOCATION = local \t\n", 232 | "GTK_MODULES = gail:atk-bridge \t\n", 233 | "XDG_DATA_DIRS = 
/usr/share/mate:/usr/local/share/:/usr/share/:/usr/share/mdm/ \t\n", 234 | "PWD = /media/tais/data/Dropbox/ULB/MAUPP/Traitements/Landscape_metrics/r.li \t\n", 235 | "COLORTERM = mate-terminal \t\n", 236 | "PYTHONLIB = /usr/lib/python2.7 \t\n", 237 | "LS_COLORS = rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36: \t\n", 238 | "PAGER = cat \t\n", 239 | "XDG_SEAT = seat0 \t\n" 240 | ] 241 | } 242 | ], 243 | "source": [ 244 | "## Display the current defined environment variables\n", 245 | "for key in os.environ.keys():\n", 246 | " print \"%s = %s \\t\" % (key,os.environ[key])" 247 | ] 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": {}, 252 | "source": [ 253 | "**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**" 254 | ] 255 | }, 256 | { 257 | "cell_type": "markdown", 258 | "metadata": {}, 259 | "source": [ 260 | "
User inputs
" 261 | ] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": 5, 266 | "metadata": { 267 | "collapsed": true 268 | }, 269 | "outputs": [], 270 | "source": [ 271 | "## Define a empty dictionnary for saving user inputs\n", 272 | "user={}" 273 | ] 274 | }, 275 | { 276 | "cell_type": "code", 277 | "execution_count": 10, 278 | "metadata": { 279 | "collapsed": true 280 | }, 281 | "outputs": [], 282 | "source": [ 283 | "## Enter the path to GRASSDATA folder\n", 284 | "user[\"gisdb\"] = \"/home/tais/Documents/GRASSDATA_Spie2017subset_Ouaga\"\n", 285 | "\n", 286 | "## Enter the name of the location (existing or for a new one)\n", 287 | "user[\"location\"] = \"SPIE_subset\"\n", 288 | "\n", 289 | "## Enter the EPSG code for this location \n", 290 | "user[\"locationepsg\"] = \"32630\"\n", 291 | "\n", 292 | "## Enter the name of the mapset to use for segmentation\n", 293 | "user[\"mapsetname\"] = \"test_rli\"" 294 | ] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "metadata": {}, 299 | "source": [ 300 | "**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**" 301 | ] 302 | }, 303 | { 304 | "cell_type": "markdown", 305 | "metadata": {}, 306 | "source": [ 307 | "
Define functions
" 308 | ] 309 | }, 310 | { 311 | "cell_type": "markdown", 312 | "metadata": {}, 313 | "source": [ 314 | "This section of the notebook is dedicated to defining functions which will then be called later in the script. If you want to create your own functions, define them here." 315 | ] 316 | }, 317 | { 318 | "cell_type": "markdown", 319 | "metadata": {}, 320 | "source": [ 321 | "### Function for computing processing time" 322 | ] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "metadata": {}, 327 | "source": [ 328 | "The \"print_processing_time\" is used to calculate and display the processing time for various stages of the processing chain. At the beginning of each major step, the current time is stored in a new variable, using [time.time() function](https://docs.python.org/2/library/time.html). At the end of the stage in question, the \"print_processing_time\" function is called and takes as argument the name of this new variable containing the recorded time at the beginning of the stage, and an output message." 329 | ] 330 | }, 331 | { 332 | "cell_type": "code", 333 | "execution_count": 11, 334 | "metadata": { 335 | "collapsed": true 336 | }, 337 | "outputs": [], 338 | "source": [ 339 | "## Import library for managing time in python\n", 340 | "import time \n", 341 | "\n", 342 | "## Function \"print_processing_time()\" compute processing time and printing it.\n", 343 | "# The argument \"begintime\" wait for a variable containing the begintime (result of time.time()) of the process for which to compute processing time.\n", 344 | "# The argument \"printmessage\" wait for a string format with information about the process. \n", 345 | "def print_processing_time(begintime, printmessage): \n", 346 | " endtime=time.time() \n", 347 | " processtime=endtime-begintime\n", 348 | " remainingtime=processtime\n", 349 | "\n", 350 | " days=int((remainingtime)/86400)\n", 351 | " remainingtime-=(days*86400)\n", 352 | " hours=int((remainingtime)/3600)\n", 353 | " remainingtime-=(hours*3600)\n", 354 | " minutes=int((remainingtime)/60)\n", 355 | " remainingtime-=(minutes*60)\n", 356 | " seconds=round((remainingtime)%60,1)\n", 357 | "\n", 358 | " if processtime<60:\n", 359 | " finalprintmessage=str(printmessage)+str(seconds)+\" seconds\"\n", 360 | " elif processtime<3600:\n", 361 | " finalprintmessage=str(printmessage)+str(minutes)+\" minutes and \"+str(seconds)+\" seconds\"\n", 362 | " elif processtime<86400:\n", 363 | " finalprintmessage=str(printmessage)+str(hours)+\" hours and \"+str(minutes)+\" minutes and \"+str(seconds)+\" seconds\"\n", 364 | " elif processtime>=86400:\n", 365 | " finalprintmessage=str(printmessage)+str(days)+\" days, \"+str(hours)+\" hours and \"+str(minutes)+\" minutes and \"+str(seconds)+\" seconds\"\n", 366 | " \n", 367 | " return finalprintmessage" 368 | ] 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "metadata": {}, 373 | "source": [ 374 | "### Function for creation of configuration file for r.li (landscape units provided as polygons)" 375 | ] 376 | }, 377 | { 378 | "cell_type": "code", 379 | "execution_count": 225, 380 | "metadata": { 381 | "collapsed": true 382 | }, 383 | "outputs": [], 384 | "source": [ 385 | "def create_rli_configfile(listoflandcoverraster,landscape_polygons,returnlistpath=False):\n", 386 | " # Check if 'listoflandcoverraster' is not empty\n", 387 | " if len(listoflandcoverraster)==0:\n", 388 | " sys.exit(\"The list of landcover raster is empty and should contain at least one raster name\")\n", 389 | " \n", 390 | " # Get the version of GRASS GIS 
\n", 391 | " version=grass.version()['version'].split('.')[0]\n", 392 | " # Define the folder to save the r.li configuration files\n", 393 | " if sys.platform==\"win32\":\n", 394 | " rli_dir=os.path.join(os.environ['APPDATA'],\"GRASS\"+version,\"r.li\")\n", 395 | " else: \n", 396 | " rli_dir=os.path.join(os.environ['HOME'],\".grass\"+version,\"r.li\")\n", 397 | " if not os.path.exists(rli_dir):\n", 398 | " os.makedirs(rli_dir) \n", 399 | " \n", 400 | " ## Create an ordered list with the 'cat' value of landscape units to be processed.\n", 401 | " list_cat=[int(x) for x in gscript.parse_command('v.db.select', quiet=True, map=landscape_polygons, column='cat', flags='c')]\n", 402 | " list_cat.sort()\n", 403 | "\n", 404 | " # Declare a empty dictionnary which will contains the north, south, east, west values for each landscape unit\n", 405 | " landscapeunit_bbox={}\n", 406 | " # Declare a empty list which will contain the path of the configation files created\n", 407 | " listpath=[]\n", 408 | " # Declare a empty string variable which will contains the core part of the r.li configuration file\n", 409 | " maskedoverlayarea=\"\"\n", 410 | " \n", 411 | " # Duplicate 'listoflandcoverraster' in a new variable called 'tmp_list'\n", 412 | " tmp_list=list(listoflandcoverraster)\n", 413 | " # Set the current landcover raster as the first of the list\n", 414 | " base_landcover_raster=tmp_list.pop(0) #The pop function return the first item of the list and delete it from the list at the same time\n", 415 | " \n", 416 | " # Loop trough the landscape units\n", 417 | " for cat in list_cat:\n", 418 | " # Extract the current landscape unit polygon as temporary vector\n", 419 | " tmp_vect=\"tmp_\"+base_landcover_raster.split(\"@\")[0]+\"_\"+landscape_polygons.split(\"@\")[0]+\"_\"+str(cat)\n", 420 | " gscript.run_command('v.extract', overwrite=True, quiet=True, input=landscape_polygons, cats=cat, output=tmp_vect)\n", 421 | " # Set region to match the extent of the current landscape polygon, with resolution and alignement matching the landcover raster\n", 422 | " gscript.run_command('g.region', vector=tmp_vect, align=base_landcover_raster)\n", 423 | " # Rasterize the landscape unit polygon\n", 424 | " landscapeunit_rast=tmp_vect[4:]\n", 425 | " gscript.run_command('v.to.rast', overwrite=True, quiet=True, input=tmp_vect, output=landscapeunit_rast, use='cat', memory='3000')\n", 426 | " # Remove temporary vector\n", 427 | " gscript.run_command('g.remove', quiet=True, flags=\"f\", type='vector', name=tmp_vect)\n", 428 | " # Set the region to match the raster landscape unit extent and save the region info in a dictionary\n", 429 | " region_info=gscript.parse_command('g.region', raster=landscapeunit_rast, flags='g')\n", 430 | " n=str(round(float(region_info['n']),5)) #the config file need 5 decimal for north and south\n", 431 | " s=str(round(float(region_info['s']),5))\n", 432 | " e=str(round(float(region_info['e']),6)) #the config file need 6 decimal for east and west\n", 433 | " w=str(round(float(region_info['w']),6))\n", 434 | " # Save the coordinates of the bbox in the dictionary (n,s,e,w)\n", 435 | " landscapeunit_bbox[cat]=n+\"|\"+s+\"|\"+e+\"|\"+w\n", 436 | " # Add the line to the maskedoverlayarea variable\n", 437 | " maskedoverlayarea+=\"MASKEDOVERLAYAREA \"+landscapeunit_rast+\"|\"+landscapeunit_bbox[cat]+\"\\n\"\n", 438 | "\n", 439 | " # Compile the content of the r.li configuration file\n", 440 | " config_file_content=\"SAMPLINGFRAME 0|0|1|1\\n\"\n", 441 | " config_file_content+=maskedoverlayarea\n", 442 | " 
config_file_content+=\"RASTERMAP \"+base_landcover_raster+\"\\n\"\n", 443 | " config_file_content+=\"VECTORMAP \"+landscape_polygons+\"\\n\"\n", 444 | "\n", 445 | " # Create a new file and save the content\n", 446 | " configfilename=base_landcover_raster.split(\"@\")[0]+\"_\"+landscape_polygons.split(\"@\")[0]\n", 447 | " path=os.path.join(rli_dir,configfilename)\n", 448 | " listpath.append(path)\n", 449 | " f=open(path, 'w')\n", 450 | " f.write(config_file_content)\n", 451 | " f.close()\n", 452 | " \n", 453 | " # Continue creation of r.li configuration file and landscape unit raster the rest of the landcover raster provided\n", 454 | " while len(tmp_list)>0:\n", 455 | " # Reinitialize 'maskedoverlayarea' variable as an empty string\n", 456 | " maskedoverlayarea=\"\"\n", 457 | " # Set the current landcover raster as the first of the list\n", 458 | " current_landcover_raster=tmp_list.pop(0) #The pop function return the first item of the list and delete it from the list at the same time\n", 459 | " # Loop trough the landscape units\n", 460 | " for cat in list_cat:\n", 461 | " # Define the name of the current \"current_landscapeunit_rast\" layer\n", 462 | " current_landscapeunit_rast=current_landcover_raster.split(\"@\")[0]+\"_\"+landscape_polygons.split(\"@\")[0]+\"_\"+str(cat) \n", 463 | " base_landscapeunit_rast=base_landcover_raster.split(\"@\")[0]+\"_\"+landscape_polygons.split(\"@\")[0]+\"_\"+str(cat) \n", 464 | " # Copy the the landscape unit created for the first landcover map in order to match the name of the current landcover map\n", 465 | " gscript.run_command('g.copy', overwrite=True, quiet=True, raster=(base_landscapeunit_rast,current_landscapeunit_rast))\n", 466 | " # Add the line to the maskedoverlayarea variable\n", 467 | " maskedoverlayarea+=\"MASKEDOVERLAYAREA \"+current_landscapeunit_rast+\"|\"+landscapeunit_bbox[cat]+\"\\n\"\n", 468 | " # Compile the content of the r.li configuration file\n", 469 | " config_file_content=\"SAMPLINGFRAME 0|0|1|1\\n\"\n", 470 | " config_file_content+=maskedoverlayarea\n", 471 | " config_file_content+=\"RASTERMAP \"+current_landcover_raster+\"\\n\"\n", 472 | " config_file_content+=\"VECTORMAP \"+landscape_polygons+\"\\n\"\n", 473 | "\n", 474 | " # Create a new file and save the content\n", 475 | " configfilename=current_landcover_raster.split(\"@\")[0]+\"_\"+landscape_polygons.split(\"@\")[0]\n", 476 | " path=os.path.join(rli_dir,configfilename)\n", 477 | " listpath.append(path)\n", 478 | " f=open(path, 'w')\n", 479 | " f.write(config_file_content)\n", 480 | " f.close()\n", 481 | " \n", 482 | " # Return a list of path of configuration files creates if option actived\n", 483 | " if returnlistpath:\n", 484 | " return listpath" 485 | ] 486 | }, 487 | { 488 | "cell_type": "markdown", 489 | "metadata": {}, 490 | "source": [ 491 | "### Function for creation of binary raster from a categorical raster" 492 | ] 493 | }, 494 | { 495 | "cell_type": "code", 496 | "execution_count": 306, 497 | "metadata": { 498 | "collapsed": true 499 | }, 500 | "outputs": [], 501 | "source": [ 502 | "###### Function creating a binary raster for each category of a base raster. \n", 503 | "### The function run within the current region. If a category do not exists in the current region, no binary map will be produce\n", 504 | "# 'categorical_raster' wait for the name of the base raster to be used. 
It is the one from which one binary raster will be produced for each category value\n", 505 | "# 'prefix' wait for a string corresponding to the prefix of the name of the binary raster which will be produced\n", 506 | "# 'setnull' wait for a boolean value (True, False) according to the fact that the output binary should be 1/0 or 1/null\n", 507 | "# 'returnlist' wait for a boolean value (True, False) regarding to the fact that a list containing the name of binary raster is desired as return of the function\n", 508 | "# 'category_list' wait for a list of interger corresponding to specific category of the base raster to be used \n", 509 | "\n", 510 | "def create_binary_raster(categorical_raster,prefix=\"binary\",setnull=False,returnlist=False,category_list=None):\n", 511 | " returnlist=[] #Declare empty list for return\n", 512 | " #gscript.run_command('g.region', raster=categorical_raster, quiet=True) #Set the region\n", 513 | " null='null()' if setnull else '0' #Set the value for r.mapcalc\n", 514 | " minclass=1 if setnull else 2 #Set the value to check if the binary raster is empty\n", 515 | " if category_list == None: #If no category_list provided\n", 516 | " category_list=[cl for cl in gscript.parse_command('r.category',map=categorical_raster,quiet=True)]\n", 517 | " for i,x in enumerate(category_list): #Make sure the format is UTF8 and not Unicode\n", 518 | " category_list[i]=x.encode('UTF8')\n", 519 | " category_list.sort(key=float) #Sort the raster categories in ascending.\n", 520 | " #Create a binary raster for the current class\n", 521 | " for cl in category_list:\n", 522 | " binary_class=prefix+\"_\"+cl\n", 523 | " gscript.run_command('r.mapcalc', expression=binary_class+'=if('+categorical_raster+'=='+str(cl)+',1,'+null+')',overwrite=True, quiet=True)\n", 524 | " if len(gscript.parse_command('r.category',map=binary_class,quiet=True))>=minclass: #Check if created binary is not empty\n", 525 | " returnlist.append(binary_class)\n", 526 | " else:\n", 527 | " gscript.run_command('g.remove', quiet=True, flags=\"f\", type='raster', name=binary_class)\n", 528 | " if returnlist:\n", 529 | " return returnlist" 530 | ] 531 | }, 532 | { 533 | "cell_type": "markdown", 534 | "metadata": {}, 535 | "source": [ 536 | "**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**" 537 | ] 538 | } 539 | ], 540 | "metadata": { 541 | "anaconda-cloud": {}, 542 | "kernelspec": { 543 | "display_name": "Python 2", 544 | "language": "python", 545 | "name": "python2" 546 | }, 547 | "language_info": { 548 | "codemirror_mode": { 549 | "name": "ipython", 550 | "version": 2 551 | }, 552 | "file_extension": ".py", 553 | "mimetype": "text/x-python", 554 | "name": "python", 555 | "nbconvert_exporter": "python", 556 | "pygments_lexer": "ipython2", 557 | "version": "2.7.12" 558 | } 559 | }, 560 | "nbformat": 4, 561 | "nbformat_minor": 1 562 | } 563 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Grippa Tais 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the 
Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Street blocks features computation 2 | 3 | This repository contains the Python script (in a Jupyter notebook) used for computing landscape metrics in street blocks or any other landscape unit defined by a shapefile provided by the user. 4 | 5 | This code was published along with the following paper: 6 | 7 | Grippa et al. Mapping Urban Land Use at Street Block Level Using OpenStreetMap, Remote Sensing Data, and Spatial Metrics. ISPRS Int. J. Geo-Inf. 2018, 7, 246. [doi:10.3390/ijgi7070246](https://doi.org/10.3390/ijgi7070246) 8 | 9 | ## Cite this code 10 | Please use the following DOI for citing this code: [![DOI](https://zenodo.org/badge/117551665.svg)](https://zenodo.org/badge/latestdoi/117551665) 11 | 12 | 13 | ## Related code 14 | The code provided in this repository computes spatial metrics for a layer of polygons. When working on urban environments, these polygons could be, for example, street blocks whose land use is to be classified. Another repository provides the code to create street block geometries from OpenStreetMap data => [https://github.com/ANAGEO/OSM_Streetblocks_extraction](https://github.com/ANAGEO/OSM_Streetblocks_extraction). 15 | 16 | ## Outputs 17 | The code relies on GRASS GIS and mainly on the [r.li suite](https://grass.osgeo.org/grass74/manuals/r.li.html). 18 | It enables automated creation of the r.li configuration files, which would otherwise have to be created through the graphical user interface ([more info](https://grass.osgeo.org/grass75/manuals/g.gui.rlisetup.html)). 19 | 20 | The script expects a user-provided land cover map, NDVI, NDWI and nDSM. If any of them is missing, the user will need to adapt the code. 21 | 22 | **The script will compute the following landscape metrics** 23 | 24 | Spatial metrics at the "landscape" level: 25 | - Dominance 26 | - Pielou 27 | - Renyi 28 | - Richness 29 | - Shannon 30 | - Simpson 31 | 32 | Spatial metrics at the "class" level: 33 | - *"patchnum"* : Patch number 34 | - *"patchdensity"* : Patch density 35 | - *"mps"* : Mean patch size 36 | - *"padsd"* : Std. dev. of patch size 37 | - *"padcv"* : Patch size coef. of variation 38 | - *"padrange"* : Range of patch size 39 | - *"shape"* : Shape index 40 | - *"prop_xx"* : Proportion of the class 41 | 42 | Street block morphology metrics: 43 | - *"area"* : Area 44 | - *"perimeter"* : Perimeter 45 | - *"compact_circle"* : Compactness relative to a circle 46 | - *"compact_square"* : Compactness relative to a square 47 | - *"fd"* : Fractal dimension 48 | 49 | Spectral metrics: 50 | - *"ndvi_stddev"* and *"ndvi_median"* : Std. dev.
and median of NDVI 51 | - *"ndwi_stddev"* and *"ndwi_median"* : Std. dev. and median of NDWI 52 | 53 | Other metrics: 54 | - *"mean_build_height"* : Mean nDSM value of built pixels 55 | - *"count_buildpixels"* : Number of built pixels in the block 56 | 57 | ## Example 58 | Presented hereafter are a few spatial metrics computed on a land cover map and used as the main features for land use classification at the street block level. 59 | 60 | **Land cover map** 61 | ![](illustration/Ouaga_LC.jpg) 62 | **Shannon index (landscape level)** 63 | ![](illustration/Ouaga_Shannon.jpg) 64 | **Patch density on "low elevated building" class (class level)** 65 | ![](illustration/Ouaga_Low_elevated_building_Patch_density.jpg) 66 | **Land use classification** 67 | ![](illustration/Ouaga_landuse_classif.jpg) 68 | -------------------------------------------------------------------------------- /illustration/Ouaga_LC.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tgrippa/Street_blocks_features_computation/5d48809f8e7d12950985dc4f5a8ad1cefb5927df/illustration/Ouaga_LC.jpg -------------------------------------------------------------------------------- /illustration/Ouaga_Low_elevated_building_Patch_density.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tgrippa/Street_blocks_features_computation/5d48809f8e7d12950985dc4f5a8ad1cefb5927df/illustration/Ouaga_Low_elevated_building_Patch_density.jpg -------------------------------------------------------------------------------- /illustration/Ouaga_Shannon.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tgrippa/Street_blocks_features_computation/5d48809f8e7d12950985dc4f5a8ad1cefb5927df/illustration/Ouaga_Shannon.jpg -------------------------------------------------------------------------------- /illustration/Ouaga_landuse_classif.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tgrippa/Street_blocks_features_computation/5d48809f8e7d12950985dc4f5a8ad1cefb5927df/illustration/Ouaga_landuse_classif.jpg --------------------------------------------------------------------------------
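To tie the pieces of this repository together, here is a minimal end-to-end sketch (not taken from the notebooks themselves) of how the class-level metrics listed in the README can be computed once a GRASS session is open and the helper functions `create_binary_raster` and `create_rli_configfile` from the r.li configuration notebook are in scope. The map names `landcover` and `streetblocks` are hypothetical.

```python
# Minimal sketch (hypothetical map names 'landcover' and 'streetblocks'); assumes an open
# GRASS session and the helper functions defined in the r.li configuration notebook.
import os
import grass.script as gscript

# 1. One binary (1/null) raster per land cover class present in the current region
binary_rasters = create_binary_raster('landcover', prefix='lc', setnull=True, returnlist=True)

# 2. One r.li configuration file per binary raster, with one masked area per street block
config_paths = create_rli_configfile(binary_rasters, 'streetblocks', returnlistpath=True)

# 3. Class-level metrics; r.li expects the configuration file *name*, not its full path
for raster, path in zip(binary_rasters, config_paths):
    for metric in ['patchnum', 'patchdensity', 'mps', 'padsd', 'padcv', 'padrange', 'shape']:
        gscript.run_command('r.li.' + metric, input=raster,
                            config=os.path.basename(path),
                            output=raster + '_' + metric, overwrite=True)
```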