├── .gitignore
├── LICENSE
├── README.md
├── bin
│   ├── decomp2json
│   └── dessem2json
├── deckparser
│   ├── __init__.py
│   ├── decomp2dicts.py
│   ├── decompdicted.py
│   ├── decompzipped.py
│   ├── dessem2dicts.py
│   ├── dessem2dicts_v2.py
│   ├── dessemsource.py
│   ├── dessemzipped.py
│   ├── importers
│   │   ├── __init__.py
│   │   ├── decomp
│   │   │   ├── __init__.py
│   │   │   ├── importDADGER.py
│   │   │   ├── importDADGNL.py
│   │   │   ├── importRELATO.py
│   │   │   └── importVAZOES.py
│   │   ├── dessem
│   │   │   ├── __init__.py
│   │   │   ├── areacont.py
│   │   │   ├── cfg
│   │   │   │   ├── __init__.py
│   │   │   │   ├── areacont.xml
│   │   │   │   ├── cotasr11.xml
│   │   │   │   ├── curvtviag.xml
│   │   │   │   ├── dadvaz.xml
│   │   │   │   ├── deflant.xml
│   │   │   │   ├── desselet.xml
│   │   │   │   ├── dessem.xml
│   │   │   │   ├── eletbase.xml
│   │   │   │   ├── eletmodif.xml
│   │   │   │   ├── entdados.xml
│   │   │   │   ├── ils_tri.xml
│   │   │   │   ├── infofcf.xml
│   │   │   │   ├── operuh.xml
│   │   │   │   ├── operut.xml
│   │   │   │   ├── ptoper.xml
│   │   │   │   ├── rampas.xml
│   │   │   │   ├── renovaveis.xml
│   │   │   │   ├── respot.xml
│   │   │   │   ├── restseg.xml
│   │   │   │   ├── rstlpp.xml
│   │   │   │   ├── simul.xml
│   │   │   │   ├── termdat.xml
│   │   │   │   └── tolperd.xml
│   │   │   ├── core
│   │   │   │   ├── __init__.py
│   │   │   │   ├── dataType.py
│   │   │   │   ├── dsFile.py
│   │   │   │   ├── exceptions.py
│   │   │   │   ├── file_decoder.py
│   │   │   │   ├── record.py
│   │   │   │   ├── table.py
│   │   │   │   └── xmlReader.py
│   │   │   ├── cotasr11.py
│   │   │   ├── curvtviag.py
│   │   │   ├── dadvaz.py
│   │   │   ├── deflant.py
│   │   │   ├── desselet.py
│   │   │   ├── dessem.py
│   │   │   ├── eletbase.py
│   │   │   ├── entdados.py
│   │   │   ├── hidr.py
│   │   │   ├── ils_tri.py
│   │   │   ├── infofcf.py
│   │   │   ├── loader.py
│   │   │   ├── operuh.py
│   │   │   ├── operut.py
│   │   │   ├── out
│   │   │   │   ├── __init__.py
│   │   │   │   ├── cfg
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── modif
│   │   │   │   │   │   ├── 18_8.json
│   │   │   │   │   │   ├── 19_0.json
│   │   │   │   │   │   └── __init__.py
│   │   │   │   │   ├── pdo_operacao.json
│   │   │   │   │   ├── pdo_sist.json
│   │   │   │   │   └── pdo_sumaoper.json
│   │   │   │   ├── pdo_base.py
│   │   │   │   ├── pdo_base_oper.py
│   │   │   │   ├── pdo_common.py
│   │   │   │   ├── pdo_operacao.py
│   │   │   │   ├── pdo_sist.py
│   │   │   │   ├── pdo_sumaoper.py
│   │   │   │   └── result_loader.py
│   │   │   ├── ptoper.py
│   │   │   ├── rampas.py
│   │   │   ├── renovaveis.py
│   │   │   ├── respot.py
│   │   │   ├── restseg.py
│   │   │   ├── rstlpp.py
│   │   │   ├── simul.py
│   │   │   ├── source.py
│   │   │   ├── termdat.py
│   │   │   ├── tolperd.py
│   │   │   ├── util.py
│   │   │   ├── util
│   │   │   │   ├── __init__.py
│   │   │   │   └── pmo.py
│   │   │   └── v2
│   │   │       ├── __init__.py
│   │   │       ├── cfg
│   │   │       │   ├── __init__.py
│   │   │       │   └── termdat.xml
│   │   │       └── termdat.py
│   │   ├── imputils.py
│   │   ├── newave
│   │   │   ├── __init__.py
│   │   │   ├── importCADIC.py
│   │   │   ├── importCADTERM.py
│   │   │   ├── importCAR.py
│   │   │   ├── importCLAST.py
│   │   │   ├── importCONFHD.py
│   │   │   ├── importCONFT.py
│   │   │   ├── importDGER.py
│   │   │   ├── importDSVAGUA.py
│   │   │   ├── importEXPH.py
│   │   │   ├── importEXPT.py
│   │   │   ├── importHIDR.py
│   │   │   ├── importMANUTT.py
│   │   │   ├── importMODIF.py
│   │   │   ├── importPATAMAR.py
│   │   │   ├── importREE.py
│   │   │   ├── importSHIST.py
│   │   │   ├── importSISTEMA.py
│   │   │   ├── importTERM.py
│   │   │   └── importVAZOES.py
│   │   └── suishi
│   │       ├── __init__.py
│   │       └── loader.py
│   ├── newave2dicts.py
│   ├── newavedicted.py
│   ├── newavezipped.py
│   ├── suishi2dicts.py
│   ├── suishidicted.py
│   └── suishizipped.py
├── docs
│   ├── Processo_de_importacao_de_um_deck_DECOMP.md
│   ├── Processo_de_importacao_de_um_deck_DESSEM.md
│   └── Processo_de_importacao_de_um_deck_NEWAVE.md
├── imgs
│   └── logo_ped_aneel.jpg
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | #Virtualenv env
2 | venv*
3 |
4 | # Byte-compiled / optimized / DLL files
5 | __pycache__/
6 | *.py[cod]
7 |
8 | # C extensions
9 | *.so
10 |
11 | # Distribution / packaging
12 | .Python
13 | env/
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | #lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .coverage
43 | .coverage.*
44 | .cache
45 | nosetests.xml
46 | coverage.xml
47 | *,cover
48 |
49 | # Translations
50 | *.mo
51 | *.pot
52 |
53 | # Django stuff:
54 | *.log
55 |
56 | # Sphinx documentation
57 | docs/_build/
58 |
59 | # PyBuilder
60 | target/
61 |
62 | # Vagrant
63 | .vagrant
64 |
65 | *~
66 | .project
67 | .pydevproject
68 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | DeckParser: A reader for NEWAVE, DECOMP and DESSEM data
2 | =============================================
3 |
4 | 
5 | 
6 |
7 | DeckParser provides programs to open and read data from the NEWAVE, DECOMP and DESSEM models.
8 |
9 | * Project under development - not yet open to community contributions (pull requests)
10 |
11 | ## Dependencies
12 |
13 | * Python > 3.6.x
14 | * numpy > 1.15.x (installed automatically)
15 | * unidecode > 1.0.x (installed automatically)
16 |
17 | ## References
18 |
19 | [Importing NEWAVE decks](docs/Processo_de_importacao_de_um_deck_NEWAVE.md)
20 |
21 | [Importing DECOMP decks](docs/Processo_de_importacao_de_um_deck_DECOMP.md)
22 |
23 | [Importing DESSEM decks](docs/Processo_de_importacao_de_um_deck_DESSEM.md)
24 |
25 | ## Installation
26 |
27 | Using pip:
28 |
29 | ```bash
30 | $ pip install git+https://git@github.com/venidera/deckparser.git
31 | ```
32 |
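A minimal usage sketch after installation (the deck file names below are illustrative, not files shipped with this repository):

```python
from deckparser import decomp2dicts
from deckparser.dessem2dicts import dessem2dicts

# DECOMP: import week (revision) 1 of a CCEE deck
dec = decomp2dicts('DEC_CCEE_201901.zip', sem=1)

# DESSEM: import every indexed day, with and without the electrical grid
des = dessem2dicts('DES_CCEE_201901.zip')
```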
33 | ## Sponsor
34 |
35 | ![Logo P&D ANEEL](imgs/logo_ped_aneel.jpg)
36 |
37 | “Este trabalho faz parte dos projetos intitulados 'Análise de Portfólio de Usinas de Geração para Atendimento da Carga Futura do Sistema Interligado Nacional - Matriz Robusta' e 'Metodologia de Despacho Hidrotérmico Horário do Sistema Interligado Nacional', ambos financiados pela EDP no âmbito do Programa de P&D da ANEEL (PD-07267-0012/2018 e PD-07267-0011/2018).”
38 |
39 | “This research is part of the projects entitled 'Portfolio Analysis of Generation Plants to Meet Future Load of the National Interconnected System - Robust Energy Matrix' and 'Methodology of Hourly Hydrothermal Programming of the National Interconnected System', financed by EDP under the ANEEL R&D Program (PD-07267-0012/2018 and PD-07267-0011/2018).”
40 |
41 | ## Development team
42 |
43 | The development team is formed by members of the Venidera team:
44 |
45 | * [André Toscano](https://github.com/aemitos)
46 | * [João Borsói Soares](https://github.com/joaoborsoi)
47 | * [Renan de Paula Maciel](https://github.com/renanmaciel)
48 |
49 | ## License
50 |
51 | http://www.apache.org/licenses/LICENSE-2.0
52 |
--------------------------------------------------------------------------------
/bin/decomp2json:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | import sys,logging,json,argparse
4 | from deckparser.decomp2dicts import decomp2dicts
5 |
6 | def main():
7 | parser = argparse.ArgumentParser(
8 |         description='Converter from a DECOMP data deck to JSON format'
9 | )
10 | parser.add_argument(
11 | "deck",
12 | help=("Arquivo do deck DECOMP (CCEE ou ONS), que pode conter "+
13 | "resultados (CCEE)"),
14 | type=str
15 | )
16 | parser.add_argument(
17 | "resultados",
18 | nargs='?',
19 | help="Arquivo de resultados do DECOMP (ONS)",
20 | type=str
21 | )
22 | parser.add_argument(
23 | "-s",
24 | help=("Especifica o número da semana (revisão) para ser utilizada, "+
25 | "começando por 1. Somente para deck da CCEE"),
26 | type=int
27 | )
28 | parser.add_argument(
29 | "-r",
30 | help=("Especifica o registro a ser importado. Só funciona com "+
31 | "especificação da semana também nos decks da CCEE."),
32 | type=str,
33 | choices=['TE', 'UH', 'CT', 'UE', 'DP', 'PQ', 'IT', 'IA', 'TX',
34 | 'DT', 'MP', 'VE', 'VM', 'DF', 'TI', 'MT', 'VI', 'RE',
35 | 'AC', 'HQ', 'HV', 'RI', 'RELATO', 'TG', 'GS', 'NL', 'GL']
36 | )
37 | args = parser.parse_args()
38 | # logging.basicConfig(level=logging.DEBUG)
39 |
40 | deck = decomp2dicts(
41 | fn=args.deck,
42 | r_fn=args.resultados,
43 | sem=args.s,
44 | reg=args.r
45 | )
46 | print(json.dumps(deck,indent=1))
47 |
48 | if __name__ == '__main__':
49 | main()
50 |
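For reference, a hedged sketch of the programmatic call this script wraps (the deck file name is illustrative):

```python
import json
from deckparser.decomp2dicts import decomp2dicts

# Roughly what `decomp2json <deck> -s 1` does, without the argparse layer
deck = decomp2dicts(fn='DEC_CCEE_201901.zip', sem=1)
print(json.dumps(deck, indent=1))
```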
--------------------------------------------------------------------------------
/deckparser/__init__.py:
--------------------------------------------------------------------------------
1 | from deckparser.newave2dicts import newave2dicts
2 | from deckparser.decomp2dicts import decomp2dicts
3 | from deckparser.suishi2dicts import suishi2dicts
4 |
--------------------------------------------------------------------------------
/deckparser/decomp2dicts.py:
--------------------------------------------------------------------------------
1 | from logging import info,debug
2 |
3 | from deckparser.decompzipped import DecompZipped
4 | #from deckparser.decompdicted import DecompDicted
5 |
6 | from deckparser.importers.newave.importHIDR import importHIDR
7 | from deckparser.importers.decomp.importDADGER import importDADGER
8 | from deckparser.importers.decomp.importDADGNL import importDADGNL
9 | from deckparser.importers.decomp.importVAZOES import importVAZOES
10 | from deckparser.importers.decomp.importRELATO import importRELATO
11 |
12 | def decomp2dicts(fn, r_fn=None, sem: int = None, reg=None):
13 | """
14 | Open the zipped file and start to import data into python dicts and lists
15 | """
16 | dz = DecompZipped(fn=fn,r_fn=r_fn)
17 | if dz.zipLoaded():
18 |
19 |         if sem is None:
20 | dd = []
21 | for i in range(1,dz.numSemanas()+1):
22 | HIDR, HIDRcount = importHIDR(fn=dz.extractFile(i,'HIDR'))
23 | try:
24 | r_f = dz.openFileExtData(i,'RELATO')
25 | except:
26 | r_f = None
27 |
28 | if r_f:
29 | relato = importRELATO(r_f)
30 | else:
31 | relato = None
32 |
33 | dd.append({
34 | "CASO": dz.openFileExtData(i,'CASO')[0].strip().upper(),
35 | "DADGER": importDADGER(dz.openFileExtData(i,'DADGER')),
36 | "DADGNL": importDADGNL(dz.openFileExtData(i,'DADGNL')),
37 | "VAZOES": importVAZOES(fn=dz.extractFile(i,'VAZOES'),blockSize=HIDRcount),
38 | "RELATO": relato
39 | })
40 | return dd
41 | else:
42 |             if reg is None:
43 | if sem >= 1 and sem <= dz.numSemanas():
44 | HIDR, HIDRcount = importHIDR(fn=dz.extractFile(sem,'HIDR'))
45 | try:
46 | r_f = dz.openFileExtData(sem,'RELATO')
47 | except:
48 | r_f = None
49 |
50 | if r_f:
51 | relato = importRELATO(r_f)
52 | else:
53 | relato = None
54 |
55 | return {
56 | "CASO": dz.openFileExtData(sem,'CASO')[0].strip().upper(),
57 | "DADGER": importDADGER(dz.openFileExtData(sem,'DADGER')),
58 | "DADGNL": importDADGNL(dz.openFileExtData(sem,'DADGNL')),
59 | "VAZOES": importVAZOES(fn=dz.extractFile(sem,'VAZOES'),
60 | blockSize=HIDRcount),
61 | "RELATO": relato
62 |
63 | }
64 | raise ValueError("Semana inválida")
65 | elif reg=="VAZOES":
66 | HIDR, HIDRcount = importHIDR(fn=dz.extractFile(sem,'HIDR'))
67 | return importVAZOES(fn=dz.extractFile(sem,'VAZOES'),
68 | blockSize=HIDRcount)
69 | elif reg=='RELATO':
70 | return importRELATO(dz.openFileExtData(sem,'RELATO'))
71 | elif reg in ('TG','GS','NL','GL'):
72 | return importDADGNL(dz.openFileExtData(sem,'DADGNL'),reg)
73 | else:
74 | return importDADGER(dz.openFileExtData(sem,'DADGER'),reg)
75 |
76 | else:
77 | return None
78 |
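A short sketch of the two calling modes handled above (the deck file name is illustrative):

```python
from deckparser.decomp2dicts import decomp2dicts

# All weeks of a CCEE deck: a list with one dict per week, each holding the
# CASO, DADGER, DADGNL, VAZOES and RELATO sections
weeks = decomp2dicts(fn='DEC_CCEE_201901.zip')

# A single week and a single DADGER record type (here the UH records)
uh = decomp2dicts(fn='DEC_CCEE_201901.zip', sem=2, reg='UH')
```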
--------------------------------------------------------------------------------
/deckparser/decompdicted.py:
--------------------------------------------------------------------------------
1 | from logging import info
2 |
3 |
4 | class DecompDicted(object):
5 | def __init__(self):
6 | self.DADGER = None
7 | self.VAZOES = None
8 |
9 |
--------------------------------------------------------------------------------
/deckparser/dessem2dicts.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 2 de nov de 2018
3 |
4 | @author: Renan Maciel
5 | '''
6 | from deckparser.importers.dessem.loader import Loader
7 | from deckparser.importers.dessem.out.result_loader import ResultLoader
8 | from deckparser.dessemsource import dessem_source
9 | from datetime import date
10 | import logging
11 | import traceback
12 |
13 | def dessem2dicts(fn, dia=None, rd=None, file_filter=None, interval_list=None, file_encoding=None, load_results=False, deck_version=1, pmo_date=None):
14 | return load_dessem(fn, dia, rd, file_filter, interval_list, 'dict', file_encoding, load_results, deck_version, pmo_date)
15 |
16 | def getLogger():
17 | return logging.getLogger(__name__)
18 |
19 | def load_dessem(fn, dia=None, rd=None, file_filter=None, interval_list=None, output_format=None, file_encoding=None, load_results=False, deck_version=2, pmo_date=None):
20 | dz = dessem_source(fn, load_results, pmo_date)
21 | if dz.validSource():
22 | rd = casesRede(rd)
23 | dia = casesDia(dia, dz.dias.keys())
24 | dd = {}
25 | for _d in dia:
26 | if isinstance(_d, int):
27 | d = dz.getDate(_d)
28 | if not d:
29 | getLogger().warning('Day not indexed: %s', str(_d))
30 | continue
31 | else: d = _d
32 |
33 | for r in rd:
34 | if r not in [True,False]:
35 | getLogger().warning('Invalid grid option (use bool True/False): %s', str(r))
36 | elif d in dz.dias and r in dz.dias[d]:
37 | if d not in dd: dd[d] = {}
38 | try:
39 | if load_results:
40 | dt = load_dessem_result(dz, d, r, file_filter, file_encoding, output_format)
41 | else:
42 | dt = load_dessem_case(dz, d, r, file_filter, interval_list, file_encoding, output_format, deck_version)
43 | except Exception as exc:
44 | print(traceback.format_exc())
45 | print(exc)
46 | if load_results:
47 | getLogger().error('Failed to load results for case: %s %s', str(d), optGridToStr(r))
48 | else:
49 | getLogger().error('Failed to load case data: %s %s', str(d), optGridToStr(r))
50 | continue
51 | if dt:
52 | dd[d][r] = dt
53 | else:
54 | if isinstance(d, date):
55 | getLogger().warning('Case not indexed: %s %s', str(d), optGridToStr(r))
56 | else:
57 | getLogger().warning('Invalid date object (use datetime.date): %s', str(d))
58 | return dd
59 | else:
60 | return None
61 |
62 | def optGridToStr(r):
63 | return ('Com Rede' if r else 'Sem Rede')
64 |
65 | def load_dessem_case(dz, d, r, file_filter=None, interval_list=None, enc=None, fmt=None, deck_version=2):
66 | rd = optGridToStr(r)
67 | getLogger().info('Loading case for date %s %s', str(d), str(rd))
68 | try:
69 | dr = dz.extractAllFiles(d, r)
70 | except:
71 | getLogger().warning('Could not open case: %s %s', str(d), str(r))
72 | return None
73 | ld = Loader(dr, enc, deck_version=deck_version)
74 | ld.setFileFilter(file_filter)
75 | ld.loadAll(interval_list)
76 | if not fmt:
77 | return ld
78 | else:
79 | return ld.getData(fmt)
80 |
81 | def load_dessem_result(dz, d, r, file_filter=None, enc=None, fmt=None):
82 | rd = optGridToStr(r)
83 | getLogger().info('Loading results for date %s %s', str(d), str(rd))
84 | ld = ResultLoader(None, enc)
85 | ld.setFileFilter(file_filter)
86 | try:
87 | dr = dz.extractFiles(d, r, ld.getFileList())
88 | except:
89 | getLogger().warning('Could not open results: %s %s', str(d), str(r))
90 | return None
91 | ld.setDirDS(dr)
92 | ld.loadAll()
93 | if not fmt:
94 | return ld
95 | else:
96 | return ld.getData(fmt)
97 |
98 | def casesRede(rd):
99 | if rd is None:
100 | return [True,False]
101 | if isinstance(rd, list):
102 | return rd
103 | return [rd]
104 |
105 | def casesDia(dia, df):
106 | if dia is None:
107 | return df
108 | if isinstance(dia, list):
109 | return dia
110 | return [dia]
111 |
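A hedged usage sketch of dessem2dicts (file name and date are illustrative):

```python
from datetime import date
from deckparser.dessem2dicts import dessem2dicts

# Import a single day, without the electrical grid ("sem rede")
dd = dessem2dicts('DES_CCEE_201901.zip', dia=[date(2019, 1, 5)], rd=False)

# The result is keyed by date and by grid option (True = com rede, False = sem rede)
case = dd[date(2019, 1, 5)][False]
```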
--------------------------------------------------------------------------------
/deckparser/dessem2dicts_v2.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.source import case_desc, dessem_source
2 | from deckparser.importers.dessem.out.result_loader import ResultLoader
3 | from deckparser.importers.dessem.loader import Loader
4 | from collections import defaultdict
5 | from datetime import date
6 | import traceback
7 | import logging
8 |
9 | def dessem2dicts(fn, dia=None, rd=None, file_filter=None, interval_list=None, file_encoding=None, load_results=False, deck_version=2, pmo_date=None):
10 | return load_dessem(fn, dia, rd, file_filter, interval_list, 'dict', file_encoding, load_results, deck_version, pmo_date)
11 |
12 | def load_dessem(fn, dia=None, rd=None, file_filter=None, interval_list=None, output_format=None, file_encoding=None, load_results=False, deck_version=2, pmo_date=None):
13 | dz = dessem_source(fn)
14 | if not dz.valid_source():
15 | return None
16 |
17 | available_dates = dz.available_dates()
18 | rd = casesRede(rd)
19 | dia = casesDia(dia, available_dates)
20 | dd = defaultdict(dict)
21 | for _d in dia:
22 | if isinstance(_d, int):
23 | d = dz.get_date(_d)
24 | if not d:
25 | getLogger().warning('Day not indexed: %s', str(_d))
26 | continue
27 | else: d = _d
28 |
29 | for r in rd:
30 | if r not in [True,False]:
31 | getLogger().warning('Invalid grid option (use bool True/False): %s', str(r))
32 | elif d in available_dates and r in dz.available_grid_options(d):
33 | try:
34 | if load_results:
35 | dt = load_dessem_result(dz, d, r, file_filter, file_encoding, output_format)
36 | else:
37 | dt = load_dessem_case(dz, d, r, file_filter, interval_list, file_encoding, output_format, deck_version)
38 | except Exception as exc:
39 | print(traceback.format_exc())
40 | print(exc)
41 | if load_results:
42 | getLogger().error('Failed to load results for case: %s', case_desc(d,r))
43 | else:
44 | getLogger().error('Failed to load case data: %s', case_desc(d,r))
45 | continue
46 | if dt:
47 | dd[d][r] = dt
48 | else:
49 | if isinstance(d, date):
50 | getLogger().warning('Case not indexed: %s', case_desc(d,r))
51 | else:
52 | getLogger().warning('Invalid date object (use datetime.date): %s', str(d))
53 | return dd
54 |
55 | def load_dessem_case(dz, d, r, file_filter=None, interval_list=None, enc=None, fmt=None, deck_version=2):
56 | getLogger().info('Loading case for date %s', case_desc(d,r))
57 | try:
58 | dr = dz.make_available(d, r, result_flag=False)
59 | except:
60 | getLogger().warning('Could not open case: %s', case_desc(d,r))
61 | return None
62 | ld = Loader(dr, enc, deck_version=deck_version)
63 | ld.setFileFilter(file_filter)
64 | ld.loadAll(interval_list)
65 | if not fmt:
66 | return ld
67 | else:
68 | return ld.getData(fmt)
69 |
70 | def load_dessem_result(dz, d, r, file_filter=None, enc=None, fmt=None):
71 | getLogger().info('Loading results for date %s', case_desc(d,r))
72 | ld = ResultLoader(None, enc)
73 | ld.setFileFilter(file_filter)
74 | try:
75 | dr = dz.make_available(d, r,
76 | result_flag=True,
77 | file_list=ld.getFileList())
78 | except:
79 | getLogger().exception('Could not open results: %s', case_desc(d,r))
80 | return None
81 | ld.setDirDS(dr)
82 | ld.loadAll()
83 | if not fmt:
84 | return ld
85 | else:
86 | return ld.getData(fmt)
87 |
88 | def casesRede(rd):
89 | if rd is None:
90 | return [True,False]
91 | if isinstance(rd, list):
92 | return rd
93 | return [rd]
94 |
95 | def casesDia(dia, df):
96 | if dia is None:
97 | return df
98 | if isinstance(dia, list):
99 | return dia
100 | return [dia]
101 |
102 | def getLogger():
103 | return logging.getLogger(__name__)
104 |
--------------------------------------------------------------------------------
/deckparser/dessemzipped.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 2 de nov de 2018
3 |
4 | @author: Renan Maciel
5 | '''
6 | import zipfile, os, re, shutil
7 | from uuid import uuid4 as hasher
8 | from datetime import date
9 | import logging
10 |
11 | class DessemZipped(object):
12 | def __init__(self, fn=None):
13 | # arquivo zipado que sera aberto
14 | self.z = None
15 | self.dias = dict()
16 | self.dirname = None
17 | self.zipfilename = None
18 | self.filename = None
19 | self.fhash = None
20 | self.internal_dir = None
21 | if fn:
22 | self.setZipFile(fn)
23 | self.setFilePattern(1)
24 | self.openZip()
25 | if len(self.dias) == 0:
26 | self.setFilePattern(2)
27 | self.openZip()
28 | else:
29 | self.fn = None
30 |
31 | def getLogger(self):
32 | return logging.getLogger(__name__)
33 |
34 | def __del__(self):
35 | for d in self.dias:
36 | for r in self.dias[d]:
37 | if self.dias[d][r]['zip'] is not None:
38 | self.closeDia(d, r)
39 | if self.z:
40 | self.z.close()
41 |
42 | def zipLoaded(self):
43 | if self.z:
44 | return True
45 | else:
46 | return False
47 |
48 | def setZipFile(self,fn):
49 | self.fn = fn
50 |
51 | def setFilePattern(self, cod):
52 | if cod == 1:
53 | self.fileReExpr = "DES_CCEE_([0-9]{4})([0-9]{2})([0-9]{2})_(Sem|Com)Rede.zip"
54 | self.fileParseFunc = DessemZipped.parseFileNamePat1
55 | elif cod == 2:
56 | self.fileReExpr = "DS_CCEE_([0-9]{2})([0-9]{4})_(SEM|COM)REDE_RV([0-9]{1})D([0-9]{2}).zip"
57 | self.fileParseFunc = DessemZipped.parseFileNamePat2
58 | else:
59 | raise ValueError('Invalid file pattern code: '+str(cod))
60 |
61 | @staticmethod
62 | def parseFileNamePat1(rr):
63 | r = True if rr.group(4) == 'Com' else False
64 | return {'ano': int(rr.group(1)), 'mes': int(rr.group(2)), 'dia': int(rr.group(3)), 'rede': r}
65 |
66 | @staticmethod
67 | def parseFileNamePat2(rr):
68 | r = True if rr.group(3) == 'COM' else False
69 | d = int(rr.group(5))
70 | m = int(rr.group(1))
71 | rv = int(rr.group(4))
72 | if rv == 0 and d > 20:
73 | m = m - 1
74 | elif rv > 3 and d < 10:
75 | m = m + 1
76 | return {'ano': int(rr.group(2)), 'mes': m, 'dia': d, 'rede': r, 'rv': rv}
77 |
78 | def openZip(self):
79 | if zipfile.is_zipfile(self.fn):
80 | self.z = zipfile.ZipFile(self.fn, 'r')
81 | real_path = os.path.realpath(self.fn)
82 | self.dirname = os.path.dirname(real_path)
83 | self.zipfilename = real_path.split("/")[-1]
84 | self.filename = self.zipfilename.split(".")[-2]
85 | self.fhash = str(hasher())
86 | for fn in self.z.namelist():
87 | rr = re.match(self.fileReExpr,fn)
88 | if rr is None:
89 | self.getLogger().info('File ignored: %s', fn)
90 | continue
91 | self.getLogger().info('File indexed: %s', fn)
92 | ps = self.fileParseFunc(rr)
93 | d = date(ps['ano'], ps['mes'], ps['dia'])
94 | r = ps['rede']
95 | if d:
96 | if d not in self.dias: self.dias[d] = {}
97 | self.dias[d][r] = {
98 | 'filename': fn,
99 | 'zip': None,
100 | 'tmpdir': None,
101 | 'filelist': dict()
102 | }
103 | else:
104 | self.getLogger().error('%s is not a zip file', self.fn)
105 |
106 | def getDate(self, dia):
107 | for d in self.dias:
108 | if d.day == dia:
109 | return d
110 |
111 | def printIndex(self):
112 | print('\nAvailable cases\n')
113 | itm = []
114 | for d in self.dias:
115 | for r in self.dias[d]:
116 | itm.append((d,r))
117 | itm.sort()
118 | for i in itm:
119 | (d, r) = i
120 | rd = 'Com rede' if r else 'Sem rede'
121 | print(d.strftime('%d/%b/%Y') + ', ' + rd)
122 |
123 | def extractAllFiles(self,dia,r):
124 | try:
125 | d = self.dias[dia][r]
126 | if d['zip'] is None:
127 | self.openDia(dia, r)
128 | for f in d['filelist']:
129 | fname = d['filelist'][f]
130 | z = d['zip']
131 | z.extract(fname, d['tmpdir'])
132 | return d['tmpdir']
133 | except:
134 | rd = 'Com rede' if r else 'Sem rede'
135 |             self.getLogger().warning('Error unzipping case: %s %s', str(dia), str(rd))
136 | raise
137 |
138 | def openDia(self, dia, r):
139 | try:
140 | if dia not in self.dias:
141 | raise ValueError('Date not indexed: '+str(dia))
142 | if r not in self.dias[dia]:
143 | raise ValueError('Grid option not available: '+str(r))
144 |
145 | d = self.dias[dia][r]
146 | if d['zip'] is not None:
147 | return
148 |
149 | tmpdir = os.path.join(self.dirname, 'temp')
150 | if not os.path.exists(tmpdir):
151 | os.mkdir(tmpdir)
152 | rd = 'ComRede' if r else 'SemRede'
153 | tmpdir = os.path.join(tmpdir, self.fhash+'_dia'+str(dia)+rd)
154 | os.mkdir(tmpdir)
155 |
156 | fname = d['filename']
157 | self.z.extract(fname, tmpdir)
158 |
159 | fPath = os.path.join(tmpdir, fname)
160 | d['zip'] = zipfile.ZipFile(fPath,'r')
161 | for fn in d['zip'].namelist():
162 | f = fn.split('.')[0].upper()
163 | d['filelist'][f] = fn
164 | d['tmpdir'] = tmpdir
165 | return tmpdir
166 | except:
167 | self.getLogger().error('Error opening day: %s', str(dia))
168 | raise
169 |
170 | def closeDia(self, dia, r):
171 | try:
172 | d = self.dias[dia][r]
173 | tmpdir = d['tmpdir']
174 | del d['zip']
175 | d['zip'] = None
176 | d['filelist'] = dict()
177 | d['tmpdir'] = None
178 | shutil.rmtree(tmpdir)
179 | except:
180 | self.getLogger().warning('Error closing day: %s', str(dia))
181 | raise
182 |
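A small sketch of how the file-name patterns above are applied (the zip entry name is illustrative):

```python
import re
from deckparser.dessemzipped import DessemZipped

dz = DessemZipped()        # no zip opened; only the pattern helpers are exercised
dz.setFilePattern(1)       # DES_CCEE_<yyyy><mm><dd>_(Sem|Com)Rede.zip
m = re.match(dz.fileReExpr, 'DES_CCEE_20190105_ComRede.zip')
print(DessemZipped.parseFileNamePat1(m))
# {'ano': 2019, 'mes': 1, 'dia': 5, 'rede': True}
```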
--------------------------------------------------------------------------------
/deckparser/importers/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/decomp/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/decomp/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/decomp/importDADGNL.py:
--------------------------------------------------------------------------------
1 | from logging import info,debug
2 |
3 | def importDADGNL(data, reg=None):
4 | numPatamares = 3
5 | DADGNL = {
6 | 'TG': dict(), 'GS': dict(), 'NL': dict(), 'GL': dict()
7 | }
8 |
9 | lineNum = 0
10 | for line in data:
11 | lineNum += 1
12 | if line[0]=='&':
13 | continue
14 |
15 | id = line[0:2].strip()
16 |
17 |         if reg is not None and id != reg:
18 | continue
19 |
20 | try:
21 | if id == 'TG':
22 | importTG(line,DADGNL['TG'])
23 | elif id == 'GS':
24 | importGS(line,DADGNL['GS'])
25 | elif id == 'NL':
26 | importNL(line,DADGNL['NL'])
27 | elif id == 'GL':
28 | importGL(line,DADGNL['GL'])
29 | except ValueError as e:
30 | info(str(e) + " linha: " + str(lineNum))
31 |
32 |     if reg is not None:
33 | return DADGNL[reg]
34 | else:
35 | return DADGNL
36 |
37 | def importTG(line,TG):
38 | codUte = int(line[4:7].strip())
39 | estagio = int(line[24:26].strip())
40 |
41 | if codUte not in TG:
42 | TG[codUte] = dict()
43 |
44 | TG[codUte][estagio] = {
45 | 'GtMin': [float(line[29:34].strip()),
46 | float(line[49:54].strip()),
47 | float(line[69:74].strip())],
48 | 'PotEfe': [float(line[34:39].strip()),
49 | float(line[54:59].strip()),
50 | float(line[74:79].strip())],
51 | 'Custo': [float(line[39:49].strip()),
52 | float(line[59:69].strip()),
53 | float(line[79:89].strip())],
54 | }
55 |
56 | def importGS(line,GS):
57 | iMes = int(line[4:6].strip())
58 | semanas = int(line[9])
59 |
60 | GS[iMes] = semanas
61 |
62 | def importNL(line,NL):
63 | codUte = int(line[4:7].strip())
64 | lag = int(line[14])
65 | NL[codUte] = lag
66 |
67 | def importGL(line,GL):
68 | codUte = int(line[4:7].strip())
69 | estagio = int(line[14:16].strip())
70 |
71 | if codUte not in GL:
72 | GL[codUte] = dict()
73 |
74 | GL[codUte][estagio] = {
75 | 'Geracao': [float(line[19:29].strip()),
76 | float(line[34:44].strip()),
77 | float(line[49:59].strip())],
78 | 'Duracao': [float(line[29:34].strip()),
79 | float(line[44:49].strip()),
80 | float(line[59:64].strip())],
81 | 'Inicio': {
82 | 'Dia': int(line[65:67].strip()),
83 | 'Mes': int(line[67:69].strip()),
84 | 'Ano': int(line[69:73].strip())
85 | }
86 | }
87 |
--------------------------------------------------------------------------------
/deckparser/importers/decomp/importVAZOES.py:
--------------------------------------------------------------------------------
1 | from numpy import dtype, fromfile, int32, float32
2 |
3 | def importVAZOES(fn,blockSize):
4 | vazoes = dict()
5 | vazaoDef = [('dado',int32)]
6 | vazaoDtype = dtype(vazaoDef)
7 |
8 | vazaoData = fromfile(fn, dtype=vazaoDtype, sep="")
9 |
10 |     # record 1 - number of gauging stations, number of stages and openings per stage
11 | vazoes["numPostos"] = int(vazaoData[0]['dado'])
12 | vazoes["numEstagios"] = int(vazaoData[1]['dado'])
13 | vazoes["aberturas"] = []
14 |
15 |     # number of openings of the last stage
16 | for i in range(vazoes["numEstagios"]):
17 | if vazaoData[2+i]['dado'] != 1 and i != vazoes["numEstagios"]-1:
18 | raise RuntimeError("Número de aberturas diferente de 1 do estágio "+str(i+1))
19 | vazoes["aberturas"].append(int(vazaoData[2+i]['dado']))
20 |
21 |
22 |     # record 2 - gauging station assigned to each plant considered
23 | vazoes["codUhe"] = []
24 | for i in range(vazoes["numPostos"]):
25 | vazoes["codUhe"].append(int(vazaoData[blockSize+i]['dado']))
26 |
27 |
28 |     # record 3 - number of complete weeks in the study, number of days excluded from
29 |     # the stage following the initial month, initial month and initial year
30 |
31 | vazoes["semanasCompletas"] = int(vazaoData[blockSize*2]['dado'])
32 | vazoes["numDiasExcl"] = int(vazaoData[blockSize*2+1]['dado'])
33 | vazoes["mesIni"] = int(vazaoData[blockSize*2+2]['dado'])
34 | vazoes["anoIni"] = int(vazaoData[blockSize*2+3]['dado'])
35 |
36 |     # record 4 - probabilities (skipped)
37 |
38 |     # record 5 - inflows for each gauging station
39 |     skip = int(vazoes["aberturas"][vazoes["numEstagios"]-1]/blockSize+1)*blockSize + 3*blockSize
40 |
41 | pos = 0
42 | vazoes["prevSem"] = []
43 | for estagio in range(1,vazoes["semanasCompletas"]+1):
44 | vazoes["prevSem"].append([])
45 | for posto in range(1,blockSize+1):
46 | vazoes["prevSem"][estagio-1].append(int(vazaoData[skip+pos]['dado']))
47 | pos += 1
48 |
49 | return vazoes
50 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/__init__.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | logging.basicConfig(level=logging.WARNING)
--------------------------------------------------------------------------------
/deckparser/importers/dessem/areacont.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class areacont(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'areacont.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | modo = None
19 | with self.openDSFile(fileName) as f:
20 | for line in f:
21 | nRec = nRec + 1
22 |
23 | if record.isComment(line) or record.isBlankLine(line):
24 | continue
25 | if record.assertString(line, 'FIM'):
26 | continue
27 | if record.assertString(line, '9999'):
28 | break
29 |
30 | if record.assertString(line, 'AREA'):
31 | modo = 'AREA'
32 | elif record.assertString(line, 'USINA'):
33 | modo = 'USINA'
34 | elif modo == 'AREA':
35 | self.getTable('AREA').parseLine(line)
36 | elif modo == 'USINA':
37 | self.getTable('USINA').parseLine(line)
38 | f.close()
39 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/cfg/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/areacont.xml:
--------------------------------------------------------------------------------
1 |
2 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/cotasr11.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/curvtviag.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/dadvaz.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/deflant.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/desselet.xml:
--------------------------------------------------------------------------------
1 |
2 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/dessem.xml:
--------------------------------------------------------------------------------
1 |
2 |
7 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/eletbase.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
106 |
108 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 |
119 |
120 |
121 |
122 |
123 |
124 |
125 |
126 |
127 |
128 |
129 |
130 |
131 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/eletmodif.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
106 |
108 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 |
119 |
120 |
121 |
122 |
123 |
124 |
125 |
126 |
127 |
128 |
129 |
130 |
131 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/ils_tri.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
108 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 |
119 |
120 |
121 |
122 |
123 |
124 |
125 |
126 |
127 |
128 |
129 |
130 |
131 |
132 |
133 |
134 |
135 |
136 |
137 |
138 |
139 |
140 |
141 |
142 |
143 |
144 |
145 |
146 |
147 |
148 |
149 |
150 |
151 |
152 |
153 |
154 |
155 |
156 |
157 |
158 |
159 |
160 |
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 |
172 |
173 |
174 |
175 |
176 |
177 |
178 |
179 |
180 |
181 |
182 |
183 |
184 |
185 |
186 |
187 |
188 |
189 |
190 |
191 |
192 |
193 |
194 |
195 |
196 |
197 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/infofcf.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/operuh.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
78 |
79 |
80 |
82 |
83 |
84 |
86 |
87 |
88 |
90 |
91 |
92 |
93 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/operut.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/ptoper.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/rampas.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/renovaveis.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
15 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/respot.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/restseg.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/rstlpp.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/simul.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/termdat.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cfg/tolperd.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/core/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/dataType.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 23 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.exceptions import ValidationException
7 |
8 | def parseDataType(v, t):
9 | if t == 'int' or t in ['h', 'd', 'ds', 'm', 'a', 'bin']:
10 | return int(v)
11 | if t == 'real':
12 | try:
13 | return float(v)
14 | except ValueError:
15 | if v == '.':
16 | return 0.0
17 | raise
18 | return v
19 |
20 | def validateDataType(v, t):
21 | if t == 'h':
22 | validateHour(v)
23 | if t == 'd':
24 | validateDay(v)
25 | if t == 'ds':
26 | validateWeekday(v)
27 | if t == 'm':
28 | validateMonth(v)
29 | if t == 'a':
30 | validateYear(v)
31 | if t == 'bin':
32 | validateBin(v)
33 |
34 | def validateHour(v):
35 | if v < 0 or v > 24:
36 | raise ValidationException('hour', v, [0,24], 'between')
37 |
38 | def validateDay(v):
39 | if v < 1 or v > 31:
40 | raise ValidationException('day', v, [1,31], 'between')
41 |
42 | def validateWeekday(v):
43 | if v < 1 or v > 7:
44 |         raise ValidationException('week day', v, [1,7], 'between')
45 |
46 | def validateMonth(v):
47 | if v < 1 or v > 12:
48 | raise ValidationException('month', v, [1,12], 'between')
49 |
50 | def validateYear(v):
51 | if v < 1:
52 | raise ValidationException('year', v, [1], '>')
53 |
54 | def validateBin(v):
55 | if v not in [0, 1]:
56 | raise ValidationException('bin', v, [0,1], 'in')
57 |
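A minimal sketch of how parsing and validation compose for a typed field (the value is illustrative):

```python
from deckparser.importers.dessem.core.dataType import parseDataType, validateDataType

v = parseDataType('18', 'h')   # hour-typed fields are parsed as int -> 18
validateDataType(v, 'h')       # passes; values outside 0..24 raise ValidationException
```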
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/dsFile.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 12 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.xmlReader import xmlReader
7 | from deckparser.importers.dessem.core.file_decoder import FileDecoder
8 | import deckparser.importers.dessem.cfg as cfg
9 |
10 | class dsFile:
11 | def __init__(self):
12 | self.records = {}
13 | self.tables = {}
14 | self.fileEncoding = None
15 | self.recFilter = None
16 | cfg = self.__getConfig()
17 | if 'xml' in cfg:
18 | self.loadConfig(cfg['xml'])
19 | else:
20 | raise ValueError('Missing xml config file')
21 |
22 | def isEmpty(self):
23 | for k in self.records:
24 | if not self.records[k].isEmpty():
25 | return False
26 | for k in self.tables:
27 | if not self.tables[k].isEmpty():
28 | return False
29 | return True
30 |
31 | def setEncoding(self, e):
32 | self.fileEncoding = e
33 |
34 | def openDSFile(self, fn):
35 | return FileDecoder(fn, preferred_encodings=self.fileEncoding)
36 |
37 | def listRecords(self):
38 | r = []
39 | for n in self.records:
40 | r.append(n)
41 | for n in self.tables:
42 | r.append(n)
43 | return r
44 |
45 | def setRecFilter(self, recList):
46 | self.recFilter = recList
47 |
48 | def filterRec(self, r):
49 | if self.recFilter is None:
50 | return True
51 | if r in self.recFilter:
52 | return True
53 | return False
54 |
55 | def toDict(self, df=True):
56 | ds = {}
57 | for k in self.records:
58 | if not self.filterRec(k):
59 | continue
60 | r = self.records[k]
61 | ds[k] = r.toDict(df)
62 | for k in self.tables:
63 | if not self.filterRec(k):
64 | continue
65 | t = self.tables[k]
66 | ds[k] = t.toDict(df)
67 | return ds
68 |
69 | def getConfigPath(self):
70 | return cfg.__path__[0]
71 |
72 | def loadConfig(self, fileName):
73 | xmlReader(self.getConfigPath()).decodeDsFile(self, fileName)
74 |
75 | def addRec(self, name, r):
76 | self.records[name] = r
77 |
78 | def addTable(self, name, r):
79 | self.tables[name] = r
80 |
81 | def getRec(self, name):
82 | return self.records.get(name)
83 |
84 | def getTable(self, name):
85 | return self.tables.get(name)
86 |
87 | def clearData(self):
88 | for n in self.records:
89 | self.records[n].clear()
90 | for n in self.tables:
91 | self.tables[n].clear()
92 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/exceptions.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 9 de nov de 2018
3 |
4 | @author: Renan Maciel
5 | '''
6 |
7 | class ValidationException(Exception):
8 | def __init__(self, vType, value, valid, condition):
9 | self.type = vType
10 | self.value = value
11 | self.valid = valid
12 | self.condition = condition
13 |
14 | def __str__(self):
15 | return 'Validation exception (type: {:s}, value = {:s}, valid: {:s} {:s})'.format(
16 | self.type, str(self.value), str(self.condition), str(self.valid))
17 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/file_decoder.py:
--------------------------------------------------------------------------------
1 | from chardet.universaldetector import UniversalDetector
2 | import chardet
3 | import logging
4 |
5 | def getLogger():
6 | return logging.getLogger(__name__)
7 |
8 | class FileDecoder:
9 | def __init__(self, file_path, preferred_encodings=None, confidence_threshold=0.8):
10 | self.file_path = file_path
11 | self.file = None
12 | self.detector = UniversalDetector()
13 | self.preferred_encodings = preferred_encodings
14 | self.confidence_threshold = confidence_threshold
15 |
16 | def __enter__(self):
17 | self.file = open(self.file_path, 'rb')
18 | return self
19 |
20 | def __exit__(self, _type, _value, _traceback):
21 | self.close()
22 |
23 | def __iter__(self):
24 | return self
25 |
26 | def __next__(self):
27 | line = self.file.readline()
28 | if not line:
29 | raise StopIteration()
30 | cd = chardet.detect(line)
31 | enc = cd['encoding']
32 | if enc is None:
33 | enc = 'utf-8'
34 | cf = cd['confidence']
35 | if not self.detector.done:
36 | self.detector.feed(line)
37 | try:
38 | s = str(line, enc)
39 | if cf < self.confidence_threshold:
40 | getLogger().debug('Low confidence level ({:.2f}) for encoding "{:s}" reading "{}"'.
41 | format(cf, enc, s.strip()))
42 | return self.__alternative_decode(line)
43 | return s
44 | except UnicodeDecodeError:
45 | getLogger().debug('Could not read line "{}" with detected encoding "{:s}"'.format(line, enc))
46 | s = self.__alternative_decode(line)
47 | if not s:
48 | raise
49 | return s
50 |
51 |     def get_encodings(self):
52 |         encodings = []
53 |         if self.preferred_encodings:
54 |             encodings.extend(self.preferred_encodings)
55 |         if self.detector.done:
56 |             d_enc = self.detector.result['encoding']
57 |             if d_enc not in encodings:
58 |                 encodings.insert(0, d_enc)
59 |         return encodings
60 |
61 | def __alternative_decode(self, line):
62 | for enc in self.get_encodings():
63 | try:
64 | s = str(line, enc)
65 | getLogger().debug('Line "{}" read successfully with encoding: {:s}'.format(s.strip(), enc))
66 | return s
67 | except UnicodeDecodeError:
68 | getLogger().debug('Could not read line "{}" with encoding: {:s}'.format(line, enc))
69 | continue
70 |         getLogger().warning('Failed to read line "{}", tried encodings: {:s}'.
71 |                             format(line, str(self.get_encodings())))
72 |
73 | def close(self):
74 | if self.file:
75 | self.file.close()
76 | self.file = None
77 |
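A minimal sketch of the decoder as a context manager and line iterator (the file path and the preferred encoding are illustrative):

```python
from deckparser.importers.dessem.core.file_decoder import FileDecoder

with FileDecoder('entdados.dat', preferred_encodings=['iso-8859-1']) as f:
    for line in f:            # each line is decoded via chardet, falling back to the
        print(line.rstrip())  # preferred encodings when confidence is low
```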
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/table.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.record import record
7 |
8 |
9 | class table:
10 |
11 | def __init__(self, recMap):
12 | self.rec = record(recMap)
13 | self.clear()
14 |
15 | def isEmpty(self):
16 | return len(self.dataSet) == 0
17 |
18 | def toDict(self, df=True):
19 | lines = self.getData(False)
20 | lst = []
21 | for ln in lines:
22 | lst.append(self.rec.lineToDict(ln, df))
23 | return lst
24 |
25 | def clear(self):
26 | self.dataSet = []
27 | self.lineSet = []
28 |
29 | def addField(self, name, cfg):
30 | self.rec.addField(name, cfg)
31 |
32 | def setRange(self, key, r):
33 | self.rec.setRange(key, r)
34 |
35 | def setField(self, key, v):
36 | self.dataSet[len(self.dataSet)-1][key] = v
37 |
38 | def getDataDefault(self):
39 | ds = []
40 | for r in self.dataSet:
41 | ds.append(self.rec.applyDefault(r))
42 | return ds
43 |
44 | def applyDefault(self, r):
45 | return self.rec.applyDefault(r)
46 |
47 | def getData(self, applyDefault=True):
48 | if applyDefault:
49 | return self.getDataDefault()
50 | return self.dataSet
51 |
52 | def getField(self, key):
53 | return self.dataSet[len(self.dataSet)-1][key]
54 |
55 | def parseLine(self, line):
56 | r = self.rec.parse(line)
57 | self.dataSet.append(r)
58 | self.lineSet.append(line)
59 | return r
60 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/core/xmlReader.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 23 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dataType import parseDataType
7 | from deckparser.importers.dessem.core.record import record
8 | from deckparser.importers.dessem.core.table import table
9 | import xml.etree.ElementTree as ET
10 | from collections import OrderedDict
11 | import os
12 |
13 | class xmlReader:
14 | def __init__(self, configPath):
15 | self.configPath = configPath
16 |
17 | def decodeDsFile(self, df, fileName):
18 | fullPath = os.path.join(self.configPath, fileName)
19 | tree = ET.parse(fullPath)
20 | rootNode = tree.getroot()
21 |
22 | #name = rootNode.attrib['name']
23 | for child in rootNode:
24 | name, rec = self.decodeRec(child)
25 | if child.tag == 'record':
26 | df.addRec(name, record(rec))
27 | elif child.tag == 'table':
28 | df.addTable(name, table(rec))
29 |
30 | def decodeRec(self, node):
31 | rec = OrderedDict()
32 | name = node.attrib['name']
33 | rec['__csv__'] = bool(node.attrib.get('csv', False))
34 |
35 | for child in node:
36 | if child.tag == 'field':
37 | try:
38 | nf, field = self.decodeField(child)
39 | except KeyError:
40 | print('Error reading field "{:s}" in record "{:s}"'.format(child.attrib['name'], name))
41 | raise
42 | rec[nf] = field
43 | return name, rec
44 |
45 | def decodeField(self, node, cpsField=False):
46 | f = OrderedDict()
47 | att = node.attrib
48 | name = att['name']
49 |
50 | if 'special' in att:
51 | f['special'] = self.decodeList(att['special'], None)
52 |
53 | if 'composed' in att and att['composed'] == 'True':
54 | f['composed'] = True
55 | f['position'] = int(att['position'])
56 | f['refField'] = att['refField']
57 | caseSet = OrderedDict()
58 | for setNode in node.iter('set'):
59 | for caseNode in setNode:
60 | val = caseNode.attrib['value']
61 | caseSet[val] = []
62 | for fdNode in caseNode:
63 | nd, fd = self.decodeField(fdNode, cpsField=True)
64 | fd['name'] = nd
65 | caseSet[val].append(fd)
66 | f['set'] = caseSet
67 | else:
68 | f['type'] = att['type']
69 | if 'default' in att:
70 | f['default'] = parseDataType(att['default'], f['type'])
71 |
72 | if cpsField:
73 | f['size'] = int(att['size'])
74 | else:
75 | c = att['c']
76 | if 'cf' in att:
77 | cf = att['cf']
78 | f['range'] = [int(c), int(cf)]
79 | else:
80 | f['range'] = [int(c), int(c)]
81 |
82 | for child in node:
83 | if child.tag == 'validate':
84 | f['validate'] = self.decodeValidation(child, f['type'])
85 |
86 | return name, f
87 |
88 | def decodeList(self, v, t):
89 | if v.find(';') < 0:
90 | return [v]
91 | sList = v.split(';')
92 | vList = []
93 | for s in sList:
94 | vList.append(parseDataType(s, t))
95 | return vList
96 |
97 | def decodeValidation(self, node, vType):
98 | f = dict()
99 | att = node.attrib
100 | if 'value' in att:
101 | f['value'] = parseDataType(att['value'], vType)
102 | if 'range' in att:
103 | f['range'] = self.decodeList(att['range'], vType)
104 | if 'list' in att:
105 | f['list'] = self.decodeList(att['list'], vType)
106 | if 'min' in att:
107 | f['min'] = parseDataType(att['min'], vType)
108 | return f
109 |
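The XML layouts consumed by this reader were stripped from this dump. Below is a hypothetical table definition, using only the node and attribute names that decodeRec/decodeField actually read; the field names and column ranges are invented for illustration:

```python
import xml.etree.ElementTree as ET
from deckparser.importers.dessem.core.xmlReader import xmlReader

sample = ET.fromstring(
    '<table name="Cotas">'
    '  <field name="idUsina" type="int"  c="1" cf="5"/>'
    '  <field name="cota"    type="real" c="6" cf="15" default="0"/>'
    '</table>'
)
name, rec = xmlReader(configPath=None).decodeRec(sample)
print(name)                    # 'Cotas'
print(rec['cota']['range'])    # [6, 15]
print(rec['cota']['default'])  # 0.0
```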
--------------------------------------------------------------------------------
/deckparser/importers/dessem/cotasr11.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 22 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class cotasr11(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'cotasr11.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isEOF(line):
23 | break
24 | if record.isComment(line) or record.isBlankLine(line):
25 | continue
26 |
27 | self.getTable('Cotas').parseLine(line)
28 | f.close()
--------------------------------------------------------------------------------
/deckparser/importers/dessem/curvtviag.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 22 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class curvtviag(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'curvtviag.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isEOF(line):
23 | break
24 | if record.isComment(line) or record.isBlankLine(line):
25 | continue
26 |                 tab = self.getTable('LN')
27 |                 if tab is not None:
28 |                     tab.parseLine(line)
29 |                 else:
30 |                     tab = self.getTable('CURVTV')
31 |                     tab.parseLine(line)
32 | f.close()
33 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/dadvaz.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class dadvaz(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'dadvaz.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isEOF(line):
23 | break
24 | if nRec == 10:
25 | self.getRec('DataHora').parse(line)
26 | if nRec == 13:
27 | self.getRec('Cabecalho').parse(line)
28 | if nRec >= 17:
29 | self.getTable('Vazoes').parseLine(line)
30 | f.close()
31 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/deflant.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class deflant(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'deflant.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isComment(line) or record.isBlankLine(line):
23 | continue
24 |
25 | self.getTable('DEFANT').parseLine(line)
26 | f.close()
27 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/desselet.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class desselet(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'desselet.xml'}
15 |
16 | def isComment(self, line):
17 | return line[0] == '('
18 |
19 | def isTableEnd(self, line):
20 | return record.assertString(line, '9999') or record.assertString(line, '99999')
21 |
22 | def readDSFile(self, fileName):
23 | nRec = 0
24 | modo = 'BASE'
25 | with self.openDSFile(fileName) as f:
26 | for line in f:
27 | nRec = nRec + 1
28 |
29 | if self.isComment(line) or record.isBlankLine(line):
30 | continue
31 | if self.isTableEnd(line):
32 | modo = 'MODIF'
33 | continue
34 | if record.isEOF(line):
35 | break
36 |
37 | if modo == 'BASE':
38 | self.getTable('Base').parseLine(line)
39 | elif modo == 'MODIF':
40 | self.getTable('Modif').parseLine(line)
41 | f.close()
42 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/dessem.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class dessem(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'dessem.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isComment(line):
23 | continue
24 | self.getTable('Arq').parseLine(line)
25 | f.close()
26 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/eletbase.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class eletbase(dsFile):
10 | def __init__(self, muda=False):
11 | self.muda = muda
12 | dsFile.__init__(self)
13 |
14 | def _dsFile__getConfig(self):
15 | if self.muda:
16 | return {'xml': 'eletmodif.xml'}
17 | return {'xml': 'eletbase.xml'}
18 |
19 | def newVersionDBAR(self):
20 | t = self.getTable('DBAR')
21 | t.setRange('idBarra', [1, 4])
22 | t.setRange('nivelTensao', [9, 9])
23 | t.setRange('nomeBarra', [10, 21])
24 | t.setRange('angTensao', [27, 30])
25 | t.setRange('potAtiva', [31, 35])
26 | t.setRange('cargaAtiva', [56, 60])
27 | t.setRange('idArea', [71, 72])
28 | t.setRange('idSistema', [77, 78])
29 |
30 | def newVersionDLIN(self):
31 | t = self.getTable('DLIN')
32 | t.setRange('idBarraOrig', [1, 5])
33 | t.setRange('codOper', [8, 8])
34 | t.setRange('idBarraDest', [11, 15])
35 | t.setRange('idCircuito', [16, 17])
36 | t.setRange('flagDelisga', [18, 18])
37 | t.setRange('refArea', [19, 19])
38 | t.setRange('resistencia', [21, 26])
39 | t.setRange('reatancia', [27, 32])
40 | t.setRange('tapNominal', [39, 43])
41 | t.setRange('angDefasagem', [54, 58])
42 | t.setRange('capFluxoNorm', [65, 68])
43 | t.setRange('capFluxoEmerg', [69, 72])
44 | t.setRange('flagViolacao', [97, 97])
45 | t.setRange('flagPerdas', [99, 99])
46 |
47 | def newVersionDCSC(self):
48 | t = self.getTable('DCSC')
49 | t.setRange('idBarraOrig', [1, 5])
50 | t.setRange('codOper', [8, 8])
51 | t.setRange('idBarraDest', [11, 15])
52 | t.setRange('idCircuito', [16, 17])
53 | t.setRange('reatancia', [38, 43])
54 |
55 | def newVersionDARE(self):
56 | t = self.getTable('DARE')
57 | t.setRange('idArea', [1, 3])
58 | t.setRange('nomeArea', [19, 54])
59 |
60 | def newVersionDANC(self):
61 | t = self.getTable('DANC')
62 | t.setRange('idArea', [1, 3])
63 | t.setRange('fatorCarga', [5, 10])
64 |
65 | def newVersionDGBT(self):
66 | t = self.getTable('DGBT')
67 | t.setRange('nivelTensao', [1, 2])
68 | t.setRange('tensaoNominal', [4, 8])
69 |
70 | def newVersionDUSI(self):
71 | self.getTable('DUSI').setRange('idBarra', [7, 11])
72 |
73 | def newVersionDREF(self):
74 | self.getTable('DREF_comp').setRange('idBarra', [6, 10])
75 | self.getTable('DREF_comp').setRange('idBarraDest', [11, 15])
76 |
77 | def isComment(self, line):
78 | return line[0] == '('
79 |
80 | def isFimBloco(self, line):
81 | return record.assertString(line, '9999') or record.assertString(line, '99999')
82 |
83 | def detectMode(self, line, sufix):
84 | for name in self.records:
85 | if record.assertString(line, name + sufix):
86 | return name
87 | for name in self.tables:
88 | if record.assertString(line, name + sufix):
89 | return name
90 | return None
91 |
92 | def readLine(self, line, mode):
93 | if mode == 'DREF':
94 | if line.startswith('RESP'):
95 | self.getTable('DREF').parseLine(line)
96 | else:
97 | self.getTable('DREF_comp').parseLine(line)
98 | self.getTable('DREF_comp').setField('idRestr', self.getTable('DREF').getField('idRestr'))
99 | elif mode in self.records:
100 | self.getRec(mode).parse(line)
101 | elif mode in self.tables:
102 | self.getTable(mode).parseLine(line)
103 |
104 | def readDSFile(self, fileName):
105 | nRec = 0
106 | modo = None
107 | sufix = ' MUDA' if self.muda else ''
108 |
109 |         # The specification of the old and the new version appears to be swapped in the manual
110 | #self.newVersionDBAR()
111 | self.newVersionDLIN()
112 | self.newVersionDCSC()
113 | self.newVersionDARE()
114 | self.newVersionDANC()
115 | self.newVersionDGBT()
116 | self.newVersionDUSI()
117 | self.newVersionDREF()
118 |
119 | with self.openDSFile(fileName) as f:
120 | for line in f:
121 | nRec = nRec + 1
122 |
123 | if self.isComment(line) or record.isBlankLine(line):
124 | continue
125 | if self.isFimBloco(line):
126 | modo = None
127 | continue
128 |
129 | m = self.detectMode(line, sufix)
130 | if m is not None:
131 | modo = m
132 | else:
133 | try:
134 | self.readLine(line, modo)
135 | except ValueError:
136 | print('Record: {:s}'.format(modo))
137 | raise
138 | f.close()
139 |
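
The muda flag selects between the two XML layouts: the base electrical network (cfg/eletbase.xml) and the modification file (cfg/eletmodif.xml), whose block headers carry a ' MUDA' suffix. A short usage sketch (file names are hypothetical):

    from deckparser.importers.dessem.eletbase import eletbase

    base = eletbase()              # base case, block headers such as 'DBAR', 'DLIN'
    base.readDSFile('base1.dat')   # hypothetical base-case file

    mod = eletbase(muda=True)      # modification file, headers such as 'DBAR MUDA'
    mod.readDSFile('modif1.dat')   # hypothetical modification file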
--------------------------------------------------------------------------------
/deckparser/importers/dessem/entdados.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 6 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class entdados(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'entdados.xml'}
15 |
16 | def readLine(self, line):
17 | ls = line.strip()
18 |
19 | for size in range(6, 2, -1):
20 | name = ls[0:size-1]
21 |
22 | if name == 'META':
23 | meta = self.getRec('META').parse(line)
24 | if meta['nomeCampo2'] == 'CJSIST':
25 | self.getTable('META_CJSIST').parseLine(line)
26 | elif meta['nomeCampo2'] == 'RECEB':
27 | self.getTable('META_RECEB').parseLine(line)
28 | elif meta['nomeCampo2'] == 'GTER':
29 | self.getTable('META_GTER').parseLine(line)
30 | break
31 | elif name in self.records:
32 | self.getRec(name).parse(line)
33 | break
34 | elif name in self.tables:
35 | self.getTable(name).parseLine(line)
36 | break
37 | elif name in ['CI', 'CE']:
38 | self.getTable('CICE').parseLine(line)
39 |
40 | def readDSFile(self, fileName):
41 | nRec = 0
42 | with self.openDSFile(fileName) as f:
43 | for line in f:
44 | nRec = nRec + 1
45 |
46 | if record.isComment(line) or record.isBlankLine(line):
47 | continue
48 |
49 | self.readLine(line)
50 | f.close()
51 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/hidr.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 7 de ago de 2018
3 |
4 | @author: Renan Maciel
5 | '''
6 | import struct
7 | import logging
8 |
9 | class HIDR():
10 | def __init__(self):
11 | self.keyList = ['idUsina']
12 | self.lines = []
13 | self.recFilter = None
14 |
15 | def isEmpty(self):
16 | return len(self.lines) == 0
17 |
18 | def clearData(self):
19 | self.lines = []
20 |
21 | def listRecords(self):
22 | return ['UHE']
23 |
24 | def setRecFilter(self, recList):
25 | self.recFilter = recList
26 |
27 | def filterRec(self, r):
28 | if self.recFilter is None:
29 | return True
30 | if r in self.recFilter:
31 | return True
32 | return False
33 |
34 | def toDict(self):
35 | if not self.filterRec('UHE'):
36 | return {}
37 | lst = []
38 | for ln in self.lines:
39 | ds = {}
40 | for k in self.keyList:
41 | ds[k] = ln[k]
42 | lst.append(ds)
43 | return {'UHE': lst}
44 |
45 | def search(self, fileName, nomeUsina):
46 | c=1
47 | with open(fileName, 'rb') as f:
48 | while(self.readLine(f)):
49 | if self.currLine['nome'].find(nomeUsina) >= 0:
50 | self.currLine['idUsina'] = c
51 | self.lines.append(self.currLine)
52 | c=c+1
53 | f.close()
54 |
55 | def readDSFile(self, fileName, lines=1e6):
56 | c=1
57 | with open(fileName, 'rb') as f:
58 | while(self.readLine(f)):
59 | self.currLine['idUsina'] = c
60 | if self.currLine['idPosto'] > 0:
61 | self.lines.append(self.currLine)
62 | c=c+1
63 | if c >= lines:
64 | break
65 | f.close()
66 |
67 | def getField(self, k, default):
68 | f = self.currLine.get(k)
69 | if f is None: return default
70 | return f
71 |
72 | def readField(self, f, k, t, size):
73 | b = f.read(size)
74 | if b == '' or len(b) 3 and line[:3] == 'MAX':
34 | self.getRec('VazoesMax').parse(line)
35 | else:
36 | self.getTable('Vazoes').parseLine(line)
37 | f.close()
38 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/infofcf.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 12 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class infofcf(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'infofcf.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isBlankLine(line) or record.isComment(line):
23 | continue
24 | if record.isEOF(line) or record.isComment(line):
25 | break
26 |
27 | r = self.getRec('Geral').parse(line)
28 | if r['nomeCampo'] == 'MAPFCF':
29 | m = self.getRec('MAPFCF').parse(line)
30 | if m['nomeDado'] == 'SISGNL':
31 | self.getTable('SISGNL').parseLine(line)
32 | elif m['nomeDado'] == 'DURPAT':
33 | self.getTable('DURPAT').parseLine(line)
34 | elif m['nomeDado'] == 'TVIAG':
35 | self.getTable('TVIAG').parseLine(line)
36 | elif m['nomeDado'] == 'CGTMIN':
37 | self.getTable('CGTMIN').parseLine(line)
38 | elif r['nomeCampo'] == 'FCFFIX':
39 | self.getTable('FCFFIX').parseLine(line)
40 |
41 | f.close()
42 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/operuh.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class operuh(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'operuh.xml'}
15 |
16 | def readLine(self, line):
17 | r = self.getRec('Restr').parse(line)
18 | if r['nomeCampo'] != 'OPERUH':
19 | raise ValueError('Campo invalido')
20 |
21 | if r['nomeRestr'] == 'REST':
22 | self.getTable('REST').parseLine(line)
23 | elif r['nomeRestr'] == 'ELEM':
24 | self.getTable('ELEM').parseLine(line)
25 | elif r['nomeRestr'] == 'LIM':
26 | self.getTable('LIM').parseLine(line)
27 | elif r['nomeRestr'] == 'VAR':
28 | self.getTable('VAR').parseLine(line)
29 |
30 | def readDSFile(self, fileName):
31 | nRec = 0
32 | with self.openDSFile(fileName) as f:
33 | for line in f:
34 | nRec = nRec + 1
35 | if record.isComment(line) or record.isBlankLine(line):
36 | continue
37 | self.readLine(line)
38 | f.close()
39 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/operut.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class operut(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | self.ucterm = False
15 | self.flgucterm = False
16 | return {'xml': 'operut.xml'}
17 |
18 | def isEndOfBlock(self, line):
19 | return record.assertString(line, 'FIM')
20 |
21 | def readDSFile(self, fileName):
22 | nRec = 0
23 | modo = None
24 | with self.openDSFile(fileName) as f:
25 | for line in f:
26 | nRec = nRec + 1
27 |
28 | if record.isComment(line) or record.isBlankLine(line):
29 | continue
30 | if self.isEndOfBlock(line):
31 | modo = None
32 |
33 | if record.assertString(line, 'UCTERM'):
34 | self.ucterm = True
35 | elif record.assertString(line, 'FLGUCTERM'):
36 | self.flgucterm = True
37 | elif record.assertString(line, 'INIT'):
38 | modo = 'INIT'
39 | elif record.assertString(line, 'OPER'):
40 | modo = 'OPER'
41 | elif modo == 'INIT':
42 | self.getTable('INIT').parseLine(line)
43 | elif modo == 'OPER':
44 | self.getTable('OPER').parseLine(line)
45 | f.close()
46 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/out/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/cfg/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/out/cfg/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/cfg/modif/19_0.json:
--------------------------------------------------------------------------------
1 | {
2 | "pdo_operacao": {
3 | "2": {
4 | "name": "vazoes",
5 | "fields": [
6 | {
7 | "name": "idUhe",
8 | "type": "int",
9 | "c": 0,
10 | "cf": 4
11 | },
12 | {
13 | "name": "nome",
14 | "type": "string",
15 | "c": 4,
16 | "cf": 19
17 | },
18 | {
19 | "name": "subSistema",
20 | "type": "string",
21 | "c": 19,
22 | "cf": 22
23 | },
24 | {
25 | "name": "vazIncr",
26 | "type": "real",
27 | "c": 22,
28 | "cf": 30,
29 | "unit": "m3/s",
30 | "desc": "Vazão Afluente Incremental"
31 | },
32 | {
33 | "name": "vazMontV",
34 | "type": "real",
35 | "c": 30,
36 | "cf": 43,
37 | "unit": "m3/s",
38 | "desc": "Vazão Defluente a Montante V."
39 | },
40 | {
41 | "name": "vazMont",
42 | "type": "real",
43 | "c": 43,
44 | "cf": 52,
45 | "unit": "m3/s",
46 | "desc": "Vazão Defluente a Montante"
47 | },
48 | {
49 | "name": "vazTurb",
50 | "type": "real",
51 | "c": 52,
52 | "cf": 61,
53 | "unit": "m3/s",
54 | "desc": "Vazão Turbinada"
55 | },
56 | {
57 | "name": "vazVert",
58 | "type": "real",
59 | "c": 61,
60 | "cf": 71,
61 | "unit": "m3/s",
62 | "desc": "Vazão Vertida"
63 | },
64 | {
65 | "name": "vazDesv",
66 | "type": "real",
67 | "c": 71,
68 | "cf": 80,
69 | "unit": "m3/s",
70 | "desc": "Vazão Desviada"
71 | },
72 | {
73 | "name": "vazDesc",
74 | "type": "real",
75 | "c": 80,
76 | "cf": 89,
77 | "unit": "m3/s",
78 | "desc": "Vazão de Descarga"
79 | },
80 | {
81 | "name": "vazAlt",
82 | "type": "real",
83 | "c": 89,
84 | "cf": 98,
85 | "unit": "m3/s",
86 | "desc": "Vazão Alt"
87 | },
88 | {
89 | "name": "vazBomb",
90 | "type": "real",
91 | "c": 98,
92 | "cf": 107,
93 | "unit": "m3/s",
94 | "desc": "Vazão Bombeada"
95 | },
96 | {
97 | "name": "defMin",
98 | "type": "real",
99 | "c": 107,
100 | "cf": 116,
101 | "unit": "m3/s",
102 | "desc": "Defluência Mínima"
103 | },
104 | {
105 | "name": "defMax",
106 | "type": "real",
107 | "c": 116,
108 | "cf": 125,
109 | "unit": "m3/s",
110 | "desc": "Defluência Máxima"
111 | }
112 | ]
113 | }
114 | }
115 | }
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/cfg/modif/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/out/cfg/modif/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/cfg/pdo_sist.json:
--------------------------------------------------------------------------------
1 | {
2 | "pdo_sist": [
3 | {
4 | "name": "intervalo",
5 | "header": "iper",
6 | "type": "int"
7 | },
8 | {
9 | "name": "patamar",
10 | "header": "pat",
11 | "type": "string"
12 | },
13 | {
14 | "name": "subSistema",
15 | "header": "sist",
16 | "type": "string"
17 | },
18 | {
19 | "name": "cmo",
20 | "type": "real",
21 | "unit": "$/MWH",
22 | "desc": "Custo Marginal de Operação"
23 | },
24 | {
25 | "name": "demanda",
26 | "type": "real",
27 | "unit": "MW",
28 | "desc": "Demanda de Energia"
29 | },
30 | {
31 | "name": "perdas",
32 | "type": "real",
33 | "unit": "MW",
34 | "desc": "Perdas na Rede Elétrica"
35 | },
36 | {
37 | "name": "gerPeq",
38 | "header": "GpQusi",
39 | "type": "real",
40 | "unit": "MW",
41 | "desc": "Geração de Usinas não Simuladas"
42 | },
43 | {
44 | "name": "gerFixBar",
45 | "header": "GfixBar",
46 | "type": "real",
47 | "unit": "MW",
48 | "desc": "Geração FixBar"
49 | },
50 | {
51 | "name": "gerRenova",
52 | "header": "Grenova",
53 | "type": "real",
54 | "unit": "MW",
55 | "desc": "Geração de Fontes Renováveis"
56 | },
57 | {
58 | "name": "gerHidrTot",
59 | "header": "SomatGH",
60 | "type": "real",
61 | "unit": "MW",
62 | "desc": "Geração Hidrelétrica Total",
63 | "short_desc": "geracao_hidraulica"
64 | },
65 | {
66 | "name": "gerTermTot",
67 | "header": "SomatGT",
68 | "type": "real",
69 | "unit": "MW",
70 | "desc": "Geração Termelétrica Total",
71 | "short_desc": "geracao_termica"
72 | },
73 | {
74 | "name": "consumoBomb",
75 | "header": "ConsEleva",
76 | "type": "real",
77 | "unit": "MW",
78 | "desc": "Consumo com Bombeamento"
79 | },
80 | {
81 | "name": "importacao",
82 | "header": "Import",
83 | "type": "real",
84 | "unit": "MW",
85 | "desc": "Importação de Energia"
86 | },
87 | {
88 | "name": "exportacao",
89 | "header": "Export",
90 | "type": "real",
91 | "unit": "MW",
92 | "desc": "Exportação de Energia"
93 | },
94 | {
95 | "name": "deficit",
96 | "header": "CortCarg",
97 | "type": "real",
98 | "unit": "MW",
99 | "desc": "Déficit de Energia"
100 | },
101 | {
102 | "name": "saldo",
103 | "type": "real",
104 | "unit": "MW",
105 | "desc": "Saldo de Energia"
106 | },
107 | {
108 | "name": "recebimento",
109 | "type": "real",
110 | "unit": "MW",
111 | "desc": "Recebimento de Energia"
112 | },
113 | {
114 | "name": "gerTermMinTot",
115 | "header": "SomaGTMin",
116 | "type": "real",
117 | "unit": "MW",
118 | "desc": "Geração Termelétrica Mínima Total"
119 | },
120 | {
121 | "name": "gerTermMaxTot",
122 | "header": "SomatGTMax",
123 | "type": "real",
124 | "unit": "MW",
125 | "desc": "Geração Termelétrica Máxima Total"
126 | },
127 | {
128 | "name": "earm",
129 | "type": "real",
130 | "unit": "MWH",
131 | "desc": "Energia Armazenada"
132 | }
133 | ]
134 | }
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/pdo_base.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.out.pdo_common import parseValue
2 | from deckparser.importers.dessem.out import cfg
3 | import json
4 | import os
5 | import io
6 |
7 | class ColumnDef:
8 | def __init__(self, f):
9 | self.raw = f
10 | self.name = f['name']
11 | self.header = f.get('header')
12 | self.type = f['type']
13 | self.unit = f.get('unit')
14 | self.desc = f.get('desc')
15 | self.short_desc = f.get('short_desc')
16 |
17 | class TableDef:
18 | def __init__(self, fields):
19 | self.fields = fields
20 |
21 | def numOfCols(self):
22 | return len(self.fields)
23 |
24 | def searchColDef(self, nm):
25 | for f in self.fields:
26 | if f['name'] == nm:
27 | return ColumnDef(f)
28 |
29 | def getColByHeader(self, h):
30 | h = h.strip(' .')
31 | for f in self.fields:
32 | if self.matchHeader(f.get('header'), h) or self.matchHeader(f['name'], h):
33 | return ColumnDef(f)
34 |
35 | def matchHeader(self, hd, h):
36 | if hd is None:
37 | return False
38 | hd = hd.lower()
39 | h = h.lower()
40 | if h == hd:
41 | return True
42 | #if h.index(hd) == 0:
43 | # return True
44 |
45 | def getColDef(self, i):
46 | return ColumnDef(self.fields[i])
47 |
48 | class pdo_base:
49 | def __init__(self, tableName):
50 | self.tableName = tableName
51 | self.tableDef = self.loadTableDef()
52 | self.data = []
53 | self.header = []
54 |
55 | def loadTableDef(self):
56 | fPath = os.path.join(cfg.__path__[0], self.tableName+'.json')
57 | with io.open(fPath, 'r', encoding='utf8') as fp:
58 | try:
59 |                 d = json.load(fp)
60 |             except Exception:
61 | with open(fPath, encoding='utf-8') as fh:
62 | d = json.load(fh)
63 |
64 | fp.close()
65 | return TableDef(d[self.tableName])
66 |
67 | def readHeaderLine(self, line):
68 | hd = [h.strip() for h in self.splitLine(line)]
69 | self.header.append(hd)
70 |
71 | def composeHeaderConfig(self):
72 | self.hd_config = {}
73 | for hd in self.header:
74 | for i in range(len(hd)):
75 | cd = self.tableDef.getColByHeader(hd[i])
76 | if cd:
77 | if i in self.hd_config.keys():
78 | raise Exception('Column conflict: (field: {:s}, column1: {:s}, column2: {:s})'.format(hd[i], self.hd_config[i], cd))
79 | self.hd_config[i] = cd
80 |
81 | def readDataLine(self, line):
82 | rdt = self.splitLine(line)
83 | dt = {}
84 | for i in range(len(rdt)):
85 | cd = self.hd_config.get(i)
86 | if cd:
87 | dt[cd.name] = parseValue(rdt[i], cd.type)
88 |
89 | self.data.append(dt)
90 |
91 | def splitLine(self, line):
92 | dt = line.split(';')
93 | dt = [d.strip() for d in dt]
94 | return dt
95 |
96 | def checkHeaderLimit(self, ln):
97 | if ln == '':
98 | return False
99 | for c in ln:
100 | if c not in ['-', ';']:
101 | return False
102 | return True
103 |
104 | def openDSOFile(self, fn):
105 | return open(fn, 'r', encoding='iso-8859-1')
106 |
107 | def readFile(self, fileName):
108 | modo = None
109 | with self.openDSOFile(fileName) as f:
110 | for line in f.readlines():
111 | line = line.strip()
112 | if self.checkHeaderLimit(line):
113 | if not modo:
114 | modo = 'header'
115 | elif modo == 'header':
116 | self.composeHeaderConfig()
117 | modo = 'data'
118 | elif modo == 'header':
119 | self.readHeaderLine(line)
120 | elif modo == 'data':
121 | self.readDataLine(line)
122 |
123 | f.close()
124 |
125 | def export(self):
126 | return self.data
127 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/pdo_common.py:
--------------------------------------------------------------------------------
1 |
2 | def parseValue(v, field_type):
3 | v = v.strip()
4 | if v in ['-', '']:
5 | return None
6 | if field_type == 'int':
7 | return int(v)
8 | if field_type == 'real':
9 | try:
10 | return float(v)
11 | except ValueError:
12 | if checkInf(v):
13 | return float('inf')
14 | else:
15 | raise
16 | if field_type == 'string':
17 | return v
18 |
19 | def checkInf(v):
20 | for vc in v:
21 | if vc != '*':
22 | return False
23 | return True
24 |
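
parseValue treats '-' and empty fields as missing values, and a column filled with asterisks (an overflowed field in the DESSEM report) as infinity. A few examples of the function above:

    parseValue('  85.31 ', 'real')   # -> 85.31
    parseValue('-', 'int')           # -> None (missing value)
    parseValue('********', 'real')   # -> inf  (overflowed column)
    parseValue(' SE ', 'string')     # -> 'SE'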
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/pdo_operacao.py:
--------------------------------------------------------------------------------
1 | import re
2 | from datetime import datetime, timedelta
3 | from deckparser.importers.dessem.out.pdo_base_oper import pdo_base_oper
4 |
5 | def init_datetime(y, m, d, h, mn):
6 | delta_d = 0
7 | if h == 24:
8 | h = 0
9 | delta_d = 1
10 | return datetime(y,m,d,h,mn) + timedelta(days=delta_d)
11 |
12 | class pdo_operacao(pdo_base_oper):
13 | def __init__(self):
14 | super().__init__('pdo_operacao')
15 | self.addBlockType('interval')
16 |
17 | def checkOpenBlock_interval(self, line):
18 | dtrex = '(\d{2})\/(\d{2})\/(\d{4}) - (\d{2})\:(\d{2})'
19 | rex = 'PERIODO:\s*(\d*) - '+dtrex+' a '+dtrex
20 | m = re.match(rex, line)
21 | if not m:
22 | return False
23 | d = int(m.group(1))
24 | dts = init_datetime(int(m.group(4)),
25 | int(m.group(3)),
26 | int(m.group(2)),
27 | int(m.group(5)),
28 | int(m.group(6)))
29 | dte = init_datetime(int(m.group(9)),
30 | int(m.group(8)),
31 | int(m.group(7)),
32 | int(m.group(10)),
33 | int(m.group(11)))
34 | dtf = '%Y-%m-%dT%H:%M'
35 | bidx = [dts.strftime(dtf), dte.strftime(dtf)]
36 | self.setOpenBlock('interval', d)
37 | self.setBlockIndex(bidx)
38 | return True
39 |
40 | def readFile(self, fileName):
41 | modo = None
42 | with self.openFile(fileName) as f:
43 | for line in f:
44 | ln = line.strip()
45 | if self.checkOpenBlock_interval(ln):
46 | modo = None
47 | if self.checkOpenTable(ln):
48 | if self.openTableKey == '9':
49 | modo = 'data'
50 | else:
51 | modo = 'header'
52 | elif modo == 'header':
53 | if self.checkHeaderLimit(ln):
54 | modo = 'data'
55 | else:
56 | self.readHeaderLine(line)
57 | elif modo == 'data':
58 | if self.checkEndOfTable(ln):
59 | modo = None
60 | else:
61 | self.readDataLine(line)
62 |
63 | f.close()
64 |
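
init_datetime exists because interval boundaries in the report may be stamped with hour 24, which Python's datetime rejects; the helper rolls such stamps over to midnight of the next day:

    init_datetime(2020, 1, 31, 24, 0)   # -> datetime(2020, 2, 1, 0, 0)
    init_datetime(2020, 1, 31, 23, 30)  # -> datetime(2020, 1, 31, 23, 30)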
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/pdo_sist.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.out.pdo_base import pdo_base
2 |
3 | class pdo_sist(pdo_base):
4 | def __init__(self):
5 | super().__init__('pdo_sist')
6 |
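
pdo_sist only names the table; the parsing itself comes from pdo_base, which expects a ';'-separated report: a dashed rule, one or more header rows, another dashed rule, then data rows whose columns are matched to pdo_sist.json by header name (columns not listed in the JSON are simply skipped). A self-contained sketch with an assumed, simplified layout (the real PDO_SIST file carries many more columns):

    import tempfile
    from deckparser.importers.dessem.out.pdo_sist import pdo_sist

    sample = (
        "-----;-----;-----;--------\n"
        " iper;  pat; sist;     cmo\n"
        "-----;-----;-----;--------\n"
        "    1;MEDIA;   SE;   85.31\n"
    )
    with tempfile.NamedTemporaryFile('w', suffix='.dat', delete=False,
                                     encoding='iso-8859-1') as tmp:
        tmp.write(sample)

    p = pdo_sist()
    p.readFile(tmp.name)
    print(p.export())
    # [{'intervalo': 1, 'patamar': 'MEDIA', 'subSistema': 'SE', 'cmo': 85.31}]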
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/pdo_sumaoper.py:
--------------------------------------------------------------------------------
1 | import re
2 | from datetime import date
3 | from deckparser.importers.dessem.out.pdo_base_oper import pdo_base_oper
4 |
5 | class pdo_sumaoper(pdo_base_oper):
6 | def __init__(self):
7 | super().__init__('pdo_sumaoper')
8 | self.addBlockType('day')
9 | self.addBlockType('week')
10 |
11 | def checkOpenBlock_day(self, line):
12 | dtrex = '(\d{2})\/(\d{2})\/(\d{4})'
13 | rex = '\* RELATORIO FINAL DE OPERACAO NO DIA\s*(\d*) \: '+dtrex+'\s*\*'
14 | m = re.match(rex, line)
15 | if not m:
16 | return False
17 | d = int(m.group(1))
18 | dt = date(int(m.group(4)),
19 | int(m.group(3)),
20 | int(m.group(2)))
21 | self.setOpenBlock('day', d)
22 | self.setBlockIndex(dt.strftime('%Y-%m-%d'))
23 | return True
24 |
25 | def checkOpenBlock_week(self, line):
26 | rex = '\* RELATORIO FINAL DE OPERACAO NA SEMANA (\d*) \*'
27 | m = re.match(rex, line)
28 | if not m:
29 | return False
30 | w = int(m.group(1))
31 | self.setOpenBlock('week', w)
32 | return True
33 |
34 | def readWeekPeriod(self, line):
35 | dtrex = '(\d{2})\/(\d{2})\/(\d{4})'
36 | rex = '\* PERIODO : '+dtrex+' a '+dtrex+'\s*\*'
37 | m = re.match(rex, line)
38 | if not m:
39 | return False
40 | dt1 = date(int(m.group(3)),
41 | int(m.group(2)),
42 | int(m.group(1)))
43 | dt2 = date(int(m.group(6)),
44 | int(m.group(5)),
45 | int(m.group(4)))
46 | dtf = '%Y-%m-%d'
47 | self.setBlockIndex([dt1.strftime(dtf), dt2.strftime(dtf)])
48 | return True
49 |
50 | def readFile(self, fileName):
51 | modo = None
52 | with self.openFile(fileName) as f:
53 | for line in f:
54 | ln = line.strip()
55 | if self.checkOpenBlock_day(ln):
56 | modo = None
57 | elif self.checkOpenBlock_week(ln):
58 | modo = 'week_period'
59 | elif self.checkOpenTable(ln):
60 | if self.openTableKey == '9':
61 | modo = 'data'
62 | else:
63 | modo = 'header'
64 | elif self.checkHeaderLimit(ln):
65 | modo = 'data'
66 | elif modo == 'week_period':
67 | if self.readWeekPeriod(line):
68 | modo = None
69 | elif modo == 'header':
70 | self.readHeaderLine(line)
71 | elif modo == 'data':
72 | if self.checkEndOfTable(ln):
73 | modo = None
74 | else:
75 | self.readDataLine(line)
76 |
77 | f.close()
78 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/out/result_loader.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.out.pdo_sist import pdo_sist
2 | from deckparser.importers.dessem.out.pdo_operacao import pdo_operacao
3 | from deckparser.importers.dessem.out.pdo_sumaoper import pdo_sumaoper
4 | import logging
5 | import os
6 | import re
7 |
8 | class ResultLoader:
9 | def __init__(self, dirDS=None, fileEncoding=None):
10 | self.dirDS = dirDS
11 | self.fileEncoding = fileEncoding
12 | self.file_filter = None
13 | self.init()
14 |
15 |     ''' Initializes the importer instances '''
16 | def init(self):
17 | self.getLogger().info('Loading configuration')
18 | m = {}
19 | m['pdo_sist'] = pdo_sist()
20 | m['pdo_operacao'] = pdo_operacao()
21 | m['pdo_sumaoper'] = pdo_sumaoper()
22 | self.resultLoaders = m
23 |
24 | @staticmethod
25 | def required_files():
26 | return ['pdo_operacao']
27 |
28 | @classmethod
29 | def all_files(cls):
30 | return cls.required_files() + ['pdo_sist','pdo_sumaoper']
31 |
32 | def prepare(self):
33 | dessem_version = self.loadDessemVersion()
34 | self.getLogger().info('Preparing configuration (DESSEM version {:s})'.format(str(dessem_version)))
35 | for fk in ['pdo_sumaoper','pdo_operacao']:
36 | self.resultLoaders[fk].applyModif(dessem_version)
37 |
38 | def openFile(self, fn):
39 | fp = os.path.join(self.dirDS, fn)
40 | return open(fp, 'r', encoding='iso-8859-1')
41 |
42 | def loadDessemVersion(self):
43 | for fk in ['pdo_sist','pdo_operacao']:
44 | fn = self.__get_matching_filename(fk)
45 | if not fn:
46 | continue
47 | with self.openFile(fn) as fp:
48 | for ln in fp:
49 | v = self.__readDessemVersion(ln)
50 | if v:
51 | return v
52 |
53 | def __readDessemVersion(self, ln):
54 | rex = ".*VERSAO\s*([0-9]{1,2})(\.[0-9]{1,2}){0,1}.*"
55 | m = re.match(rex, ln)
56 | if m:
57 | v = [int(v_.strip('.')) if v_ else 0 for v_ in m.groups()]
58 | return tuple(v)
59 |
60 | def listFiles(self):
61 | return list(self.resultLoaders.keys())
62 |
63 | def setDirDS(self, dirDS):
64 | self.dirDS = dirDS
65 |
66 | def getFileList(self):
67 | return [f for f in self.resultLoaders.keys() if self.filterFile(f)]
68 |
69 | def get(self, fileType):
70 | return self.resultLoaders.get(fileType)
71 |
72 | def setFileFilter(self, file_filter):
73 | self.file_filter = file_filter
74 |
75 | def filterFile(self, f):
76 | ff = self.file_filter
77 | if ff is None:
78 | return True
79 | if f in ff:
80 | return True
81 | return False
82 |
83 | def loadAll(self):
84 | self.prepare()
85 | for f in self.resultLoaders:
86 | if self.filterFile(f):
87 | self.load(f)
88 |
89 | def __get_matching_filename(self, fk):
90 | for f in os.listdir(self.dirDS):
91 | if f.lower() == fk.lower() + '.dat':
92 | return f
93 |
94 | def load(self, fk):
95 | fn = self.__get_matching_filename(fk)
96 | if not fn:
97 | self.getLogger().warn('Missing file: %s', str(fk))
98 | return
99 | fp = os.path.join(self.dirDS, fn)
100 | try:
101 | self.resultLoaders[fk].readFile(fp)
102 | except FileNotFoundError:
103 | self.getLogger().warn('Missing file: %s', str(fp))
104 |
105 | def getData(self, fmt=None):
106 | dd = {}
107 | for f in self.resultLoaders:
108 | if not self.filterFile(f):
109 | continue
110 | ds = self.resultLoaders[f]
111 | if fmt == 'dict':
112 | dd[f] = ds.export()
113 | else:
114 | dd[f] = ds
115 | return dd
116 |
117 | def getLogger(self):
118 | return logging.getLogger(__name__)
119 |
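
A usage sketch for the loader (the directory path is hypothetical): each pdo_* file is looked up case-insensitively as '<name>.dat' inside the deck directory, the DESSEM version is read from the file banner, and the matching column layout is applied before parsing.

    from deckparser.importers.dessem.out.result_loader import ResultLoader

    loader = ResultLoader(dirDS='/path/to/dessem/case')   # hypothetical results directory
    loader.setFileFilter(['pdo_sist'])                    # optional: restrict to a subset
    loader.loadAll()
    data = loader.getData(fmt='dict')                     # plain-dict export of each loaded file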
--------------------------------------------------------------------------------
/deckparser/importers/dessem/ptoper.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class ptoper(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'ptoper.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isComment(line) or record.isBlankLine(line):
23 | continue
24 | self.getTable('PTOPER').parseLine(line)
25 | f.close()
26 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/rampas.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.core.dsFile import dsFile
2 | from deckparser.importers.dessem.core.record import record
3 |
4 | class rampas(dsFile):
5 | def __init__(self):
6 | dsFile.__init__(self)
7 |
8 | def _dsFile__getConfig(self):
9 | return {'xml': 'rampas.xml'}
10 |
11 | def isEndOfBlock(self, line):
12 | return record.assertString(line, 'FIM')
13 |
14 | def readDSFile(self, fileName):
15 | nRec = 0
16 | modo = None
17 | with self.openDSFile(fileName) as f:
18 | for line in f:
19 | nRec = nRec + 1
20 |
21 | if record.isComment(line) or record.isBlankLine(line):
22 | continue
23 | if self.isEndOfBlock(line):
24 | break
25 | if modo:
26 | self.getTable('RAMP').parseLine(line)
27 | if record.assertString(line, 'RAMP'):
28 | modo = True
29 | f.close()
30 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/renovaveis.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.core.dsFile import dsFile
2 | from deckparser.importers.dessem.core.record import record
3 |
4 | class renovaveis(dsFile):
5 |
6 | def _dsFile__getConfig(self):
7 | return {'xml': 'renovaveis.xml'}
8 |
9 | def readLine(self, line):
10 | ln = line.strip()
11 | if ln.startswith('EOLICASUBM'):
12 | self.getTable('EOLICASUBM').parseLine(line)
13 | elif ln.startswith('EOLICA-GERACAO'):
14 | self.getTable('EOLICA-GERACAO').parseLine(line)
15 | elif ln.startswith('EOLICABARRA'):
16 | self.getTable('EOLICABARRA').parseLine(line)
17 | elif ln.startswith('EOLICA'):
18 | self.getTable('EOLICA').parseLine(line)
19 |
20 | def readDSFile(self, fileName):
21 | with self.openDSFile(fileName) as f:
22 | for line in f:
23 | if record.isComment(line) or record.isBlankLine(line):
24 | continue
25 | self.readLine(line)
26 | f.close()
27 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/respot.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 4 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class respot(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'respot.xml'}
15 |
16 | def isEOF(self, line):
17 | return record.assertString(line, '9999')
18 |
19 | def isEndOfBlock(self, line):
20 | return record.assertString(line, 'FIM')
21 |
22 | def readDSFile(self, fileName):
23 | nRec = 0
24 | modo = None
25 | with self.openDSFile(fileName) as f:
26 | for line in f:
27 | nRec = nRec + 1
28 |
29 | if record.isComment(line) or record.isBlankLine(line):
30 | continue
31 | if self.isEndOfBlock(line):
32 | modo = None
33 | continue
34 | if self.isEOF(line):
35 | break
36 |
37 | if modo == 'USI':
38 | self.getTable('USI').parseLine(line)
39 | else:
40 | ls = line.strip()
41 | nc = ls[0:2]
42 | if nc == 'RP':
43 | self.getTable('RP').parseLine(line)
44 | elif nc == 'LM':
45 | self.getTable('LM').parseLine(line)
46 | elif ls[0:3] == 'USI':
47 |                         modo = 'USI'
48 |
49 | f.close()
50 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/restseg.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.core.dsFile import dsFile
2 | from deckparser.importers.dessem.core.record import record
3 |
4 | class restseg(dsFile):
5 | def __init__(self):
6 | dsFile.__init__(self)
7 |
8 | def _dsFile__getConfig(self):
9 | return {'xml': 'restseg.xml'}
10 |
11 | def detectTable(self, line):
12 | nc1,nc2 = line.split()[:2]
13 | if nc1 != 'TABSEG':
14 | return None
15 | return self.getTable(nc2)
16 |
17 | def readLine(self, line):
18 | t = self.detectTable(line)
19 | if t:
20 | t.parseLine(line)
21 |
22 | def readDSFile(self, fileName):
23 | nRec = 0
24 | with self.openDSFile(fileName) as f:
25 | for line in f:
26 | nRec = nRec + 1
27 | if record.isComment(line) or record.isBlankLine(line):
28 | continue
29 | self.readLine(line)
30 | f.close()
31 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/rstlpp.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.dessem.core.dsFile import dsFile
2 | from deckparser.importers.dessem.core.record import record
3 |
4 | class rstlpp(dsFile):
5 | def __init__(self):
6 | dsFile.__init__(self)
7 |
8 | def _dsFile__getConfig(self):
9 | return {'xml': 'rstlpp.xml'}
10 |
11 | def detectTable(self, line):
12 | nc = line.split()[0]
13 | return self.getTable(nc)
14 |
15 | def readLine(self, line):
16 | t = self.detectTable(line)
17 | if t:
18 | t.parseLine(line)
19 |
20 | def readDSFile(self, fileName):
21 | nRec = 0
22 | with self.openDSFile(fileName) as f:
23 | for line in f:
24 | nRec = nRec + 1
25 | if record.isComment(line) or record.isBlankLine(line):
26 | continue
27 | self.readLine(line)
28 | f.close()
29 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/simul.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 25 de out de 2018
3 |
4 | @author: Renan Maciel
5 | '''
6 |
7 | from deckparser.importers.dessem.core.dsFile import dsFile
8 | from deckparser.importers.dessem.core.record import record
9 |
10 | class simul(dsFile):
11 | def __init__(self):
12 | dsFile.__init__(self)
13 |
14 | def _dsFile__getConfig(self):
15 | return {'xml': 'simul.xml'}
16 |
17 | def endOfBlock(self, line):
18 | return record.assertString(line, 'FIM')
19 |
20 | def readDSFile(self, fileName):
21 | nRec = 0
22 | modo = None
23 | with self.openDSFile(fileName) as f:
24 | for line in f:
25 | nRec = nRec + 1
26 |
27 | if record.isEOF(line):
28 | break
29 | if self.endOfBlock(line):
30 | modo = None
31 | continue
32 | if nRec == 3:
33 | self.getRec('Cabecalho').parse(line)
34 |
35 |                 nc = line[0:4]
36 | if nc == 'DISC': modo = 'DISC'
37 | elif nc == 'VOLI': modo = 'VOLI'
38 | elif nc == 'OPER': modo = 'OPER'
39 |
40 | if modo is not None:
41 | self.getTable(modo).parseLine(line)
42 | f.close()
43 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/termdat.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class termdat(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'termdat.xml'}
15 |
16 | def readLine(self, line):
17 | r = self.getRec('Campo').parse(line)
18 |
19 | if r['nomeCampo'] == 'CADUSIT':
20 | self.getTable('CADUSIT').parseLine(line)
21 | elif r['nomeCampo'] == 'CADUNIDT':
22 | self.getTable('CADUNIDT').parseLine(line)
23 |
24 | def readDSFile(self, fileName):
25 | nRec = 0
26 | with self.openDSFile(fileName) as f:
27 | for line in f:
28 | nRec = nRec + 1
29 |
30 | if record.isComment(line) or record.isBlankLine(line):
31 | continue
32 | self.readLine(line)
33 | f.close()
34 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/tolperd.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 22 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 |
9 | class tolperd(dsFile):
10 | def __init__(self):
11 | dsFile.__init__(self)
12 |
13 | def _dsFile__getConfig(self):
14 | return {'xml': 'tolperd.xml'}
15 |
16 | def readDSFile(self, fileName):
17 | nRec = 0
18 | with self.openDSFile(fileName) as f:
19 | for line in f:
20 | nRec = nRec + 1
21 |
22 | if record.isEOF(line):
23 | break
24 | if record.isComment(line) or record.isBlankLine(line):
25 | continue
26 |
27 | nc = self.recGeral.parse(line)['nomeCampo']
28 | self.getTable(nc).parseLine(line)
29 | f.close()
30 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/util.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 23 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 |
7 | def printDict(d, lvm, lv=0):
8 | if lv <= lvm:
9 | if isinstance(d, list):
10 | for ln in d:
11 | printDict(ln,lvm,lv+1)
12 | elif isinstance(d, dict):
13 | for k in d:
14 | print('\t'*lv+str(k))
15 | printDict(d[k],lvm,lv+1)
16 | else:
17 | print('\t'*lv+str(d))
18 | else:
19 | print('\t'*lv+str(d))
20 |
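
printDict is a small debugging helper: it prints nested dicts and lists with one tab per nesting level and stops descending at depth lvm. For example, with the function defined above:

    d = {'UHE': [{'idUsina': 1, 'nome': 'FURNAS'}], 'versao': '19.0'}
    printDict(d, lvm=2)
    # prints 'UHE' at the top level, 'idUsina'/'nome' two tabs in (the list adds a level),
    # their values one tab deeper, then 'versao' and '19.0' back at the outer levels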
--------------------------------------------------------------------------------
/deckparser/importers/dessem/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/util/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/util/pmo.py:
--------------------------------------------------------------------------------
1 | from dateutil.relativedelta import relativedelta
2 | from datetime import timedelta, date
3 |
4 | def isFirstOperativeDay(d): # Saturday
5 | return d.weekday() == 5
6 |
7 | def firstOperativeDay(d):
8 | wd = d.weekday()
9 |     if wd >= 5: # Saturday and Sunday
10 | delta_d = 5-wd
11 | else:
12 | delta_d = -(wd+2)
13 | return d + timedelta(days=delta_d)
14 |
15 | def first_pmo_date(pmo_month):
16 | first_day = pmo_month.replace(day=1)
17 | return firstOperativeDay(first_day)
18 |
19 | def pmo_date_range(pmo_month):
20 | next_month = pmo_month + relativedelta(months=1)
21 | return [first_pmo_date(pmo_month),
22 | first_pmo_date(next_month) + timedelta(days=-1)]
23 |
24 | def pmo_date_list(pmo_month):
25 | sd,ed = pmo_date_range(pmo_month)
26 | return [sd + timedelta(days=i) for i in range(0, (ed-sd).days + 1)]
27 |
28 | def get_pmo_month(deck_date):
29 | pmo_month = deck_date.replace(day=1)
30 | pmo_range = pmo_date_range(pmo_month)
31 | if pmo_range[0] <= deck_date <= pmo_range[1]:
32 | return pmo_month
33 | return pmo_month + relativedelta(months=1)
34 |
35 | def real_month(rv, d, m, y):
36 | m_date = date(y,m,1)
37 | if rv == 0 and d > 20:
38 | m_date -= relativedelta(months=1)
39 | elif rv > 3 and d < 10:
40 | m_date += relativedelta(months=1)
41 | return m_date
42 |
43 | def real_date(rv, d, m, y):
44 | m_date = real_month(rv, d, m, y)
45 | try:
46 | return m_date.replace(day=d)
47 | except:
48 | return None
49 | #raise ValueError('Invalid deck date: {:d}-{:d}-{:d} rv {:d}'.format(y,m,d,rv))
50 |
51 | # from datetime import date
52 | # import json
53 | # date_list = [date(2020,1,1) + timedelta(days=i) for i in range(365)]
54 | # for d in date_list:
55 | # print((d, get_pmo_month(d)))
56 |
57 |
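
These helpers map calendar dates to PMO months, whose operative weeks start on Saturdays: firstOperativeDay returns the Saturday on or before the given date, and get_pmo_month assigns a deck date to the PMO month whose operative range contains it. Two worked values using the functions above:

    from datetime import date

    first_pmo_date(date(2020, 1, 1))    # -> date(2019, 12, 28), the Saturday before Jan 1st
    get_pmo_month(date(2019, 12, 30))   # -> date(2020, 1, 1): Dec 30 already belongs to the January PMO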
--------------------------------------------------------------------------------
/deckparser/importers/dessem/v2/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/v2/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/v2/cfg/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/dessem/v2/cfg/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/dessem/v2/cfg/termdat.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
--------------------------------------------------------------------------------
/deckparser/importers/dessem/v2/termdat.py:
--------------------------------------------------------------------------------
1 | '''
2 | Created on 5 de jul de 2018
3 |
4 | @author: Renan
5 | '''
6 | from deckparser.importers.dessem.core.dsFile import dsFile
7 | from deckparser.importers.dessem.core.record import record
8 | import deckparser.importers.dessem.v2.cfg as cfg_v2
9 |
10 | class termdat(dsFile):
11 | def __init__(self):
12 | dsFile.__init__(self)
13 |
14 | def _dsFile__getConfig(self):
15 | return {'xml': 'termdat.xml'}
16 |
17 | def getConfigPath(self):
18 | return cfg_v2.__path__[0]
19 |
20 | def readLine(self, line):
21 | r = self.getRec('Campo').parse(line)
22 |
23 | if r['nomeCampo'] == 'CADUSIT':
24 | self.getTable('CADUSIT').parseLine(line)
25 | elif r['nomeCampo'] == 'CADUNIDT':
26 | self.getTable('CADUNIDT').parseLine(line)
27 | elif r['nomeCampo'] == 'CADCONF':
28 | self.getTable('CADCONF').parseLine(line)
29 | elif r['nomeCampo'] == 'CADMIN':
30 | self.getTable('CADMIN').parseLine(line)
31 |
32 | def readDSFile(self, fileName):
33 | nRec = 0
34 | with self.openDSFile(fileName) as f:
35 | for line in f:
36 | nRec = nRec + 1
37 |
38 | if record.isComment(line) or record.isBlankLine(line):
39 | continue
40 | self.readLine(line)
41 | f.close()
42 |
--------------------------------------------------------------------------------
/deckparser/importers/imputils.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def searchInList(listobj, argsearch):
4 | result = dict()
5 | search = argsearch.strip().lower()
6 | for nline, vline in enumerate(listobj):
7 | # if svline.rfind(argsearch) != -1:
8 | if search in vline.strip().lower():
9 | result['line'] = nline
10 | result['value'] = vline
11 | return result
12 |
13 |
14 | def getUpdateIndexes(mes_i, ano_i, mes_f, ano_f, dger):
15 | lista = list()
16 | tanoi = int(ano_i) - dger['yi']
17 | tmesi = int(mes_i)
18 | tanof = int(ano_f) - dger['yi']
19 | tmesf = int(mes_f)
20 | idx_ini = (tanoi * 12) + (12 - dger['mi']) - (12 - tmesi)
21 | idx_fim = (tanof * 12) + (12 - dger['mi']) - (12 - tmesf)
22 | if idx_ini == idx_fim:
23 | lista.append(idx_ini)
24 | else:
25 | lista = [x for x in range(idx_ini, idx_fim + 1)]
26 | return lista
27 |
28 |
29 | def line2list(dline, mi, ar, mf, bloco, vlista=list(), dger=None):
30 | if len(vlista) == 0:
31 | vlista = [0 for i in range(dger['ni'])]
32 | leituras = getUpdateIndexes(mi, ar, mf, ar, dger)
33 | posini = (int(mi) - 1) * bloco
34 | for n, value in enumerate(leituras):
35 | posfim = posini + ((n + 1) * bloco)
36 | if posfim > len(dline):
37 | posfim = len(dline) - 1
38 | posinimov = posfim - bloco
39 | vlista[value] = float(dline[posinimov:posfim].strip())
40 | return vlista
41 |
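
getUpdateIndexes converts a month/year span into zero-based offsets from the study start (dger['mi'] / dger['yi']), which is how the NEWAVE importers index their per-month value lists; line2list then writes fixed-width values into those positions. A small example (the real dger dict carries more keys, e.g. 'ni' and 'yph'; only 'yi' and 'mi' are used here):

    dger = {'yi': 2020, 'mi': 1}              # study starts in January 2020
    getUpdateIndexes(3, 2020, 5, 2020, dger)  # -> [2, 3, 4]  (March..May 2020)
    getUpdateIndexes(1, 2021, 1, 2021, dger)  # -> [12]       (January 2021)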
--------------------------------------------------------------------------------
/deckparser/importers/newave/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/newave/__init__.py
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCADIC.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def importCADIC(fdata, dger):
4 | CADIC = dict()
5 |     # Index of the line currently being read
6 | liner = 2
7 | impc = -1
8 | if len(fdata) > 0:
9 | while fdata[liner].strip() != '999':
10 | vcheck = fdata[liner][:4].strip()
11 | if 0 < len(vcheck) < 4 and 'PRE' != vcheck and 'POS' != vcheck:
12 | vals = fdata[liner].split()
13 | subsis = vals[0]
14 | if subsis not in CADIC.keys():
15 | CADIC[subsis] = dict()
16 | pt1 = vals[1]
17 | if len(vals) > 2:
18 | pt2 = vals[2]
19 | else:
20 | pt2 = ''
21 | liner += 1
22 | impc += 1
23 | else:
24 | ano = fdata[liner][:4].strip()
25 |                     # PRE is still to be handled
26 | if ano == '' or len(fdata[liner].strip()) <= 4:
27 |                     # PDE or auction (Leilao) - no C_ADIC
28 | valores = [0.0] * dger['ni']
29 | CADIC[subsis][impc] = {'pt1': pt1, 'pt2': pt2, 'valores': valores.copy()}
30 | while fdata[liner][:4].strip() == '' or len(fdata[liner].strip()) <= 4 and fdata[liner].strip() != '999':
31 | liner += 1
32 | continue
33 | elif len(ano) == 4 and ano in str(dger['yph']):
34 |                     # Reading the values
35 | lvals = fdata[liner][7:].split()
36 | if int(ano) == dger['yi'] and dger['mi']>1 and len(lvals)==12:
37 | valores = lvals[dger['mi'] - 1:]
38 | else:
39 | valores = lvals
40 | if impc not in CADIC[subsis]:
41 | CADIC[subsis][impc] = {'pt1': pt1, 'pt2': pt2, 'valores': valores}
42 | else:
43 | CADIC[subsis][impc]['valores'] += valores.copy()
44 | liner = liner + 1
45 | return CADIC
46 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCADTERM.py:
--------------------------------------------------------------------------------
1 | def importCADTERM(fobj):
2 | CADTERM = dict()
3 | if fobj:
4 | for i, rv in enumerate(fobj):
5 | v = rv.decode('utf-8')
6 | if (i > 1) and (v.strip() != "") and 'NUM' not in v and 'XXXXX' not in v:
7 | sstr = len(v[0:6].strip())
8 | if sstr == 3:
9 | # Linha de codigo de UTE - Tem zeros
10 | codute = str(int(v[0:6].strip()))
11 | elif sstr == 6:
12 | if codute not in CADTERM.keys():
13 | CADTERM[codute] = 1
14 | else:
15 | CADTERM[codute] += 1
16 | else:
17 | print('CADTERM not found.')
18 | return CADTERM
19 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCAR.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.imputils import searchInList, getUpdateIndexes
2 |
3 | def importCAR(fdata,dger):
4 | CAR = dict()
5 |     # Start reading the file if it has content
6 | if len(fdata) > 0 and fdata[0].strip() != '':
7 | strsearch = searchInList(fdata,'CURVA DE SEGURANCA')
8 | if strsearch['line']:
9 | linerefini = int(strsearch['line'] + 3)
10 | strsfim = searchInList(fdata,'PROCESSO ITERATIVO')
11 | linereffim = int(strsfim['line'])
12 | for line in range(linerefini,linereffim):
13 | if fdata[line][0:2] == ' ' and fdata[line].strip() != '':
14 | SUBSIS = fdata[line].strip()
15 | curva = list()
16 | curva = [ 0 for i in range(dger['ni']) ]
17 | elif fdata[line][0:4].strip() in str(dger['yph']):
18 |                 # Reading the generation of small plants
19 | anoleitura = int(fdata[line][0:4].strip())
20 | mesini = 1
21 | if anoleitura == dger['yi']:
22 | mesini = dger['mi']
23 | leituras = getUpdateIndexes(mesini,anoleitura,12,anoleitura,dger)
24 | dados = fdata[line][6:]
25 | posini = (int(mesini)-1)*6
26 | for n, value in enumerate(leituras):
27 | posfim = posini+((n+1)*6)
28 | posinimov = posfim-6
29 | curva[value] = float(dados[posinimov:posfim].strip())
30 | CAR[SUBSIS] = curva
31 | return CAR
32 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCLAST.py:
--------------------------------------------------------------------------------
1 | def importCLAST(fobj, utes, nyears):
2 | CLAST = dict()
3 | ModifCLAST = dict()
4 | for i in utes:
5 | ModifCLAST[i] = list()
6 | secParte = 0
7 | for i, bv in enumerate(fobj):
8 | v = bv.decode('utf-8')
9 | if secParte == 0:
10 | if v[0:5] == " 9999":
11 | secParte = 1
12 | if (i > 1) and (v.strip() != "") and (v[0] == " ") and secParte == 0:
13 | CodUTE = v[0:6].strip()
14 | nomeclasse = v[6:19].strip()
15 | tipocomb = v[19:30].strip()
16 | tcusto = dict()
17 | inti = 30
18 | intf = 38
19 | for x in range(nyears):
20 | tcusto[x+1] = v[inti:intf].strip()
21 | inti = intf
22 | intf = intf + 8
23 | CLAST[CodUTE] = {'nomeclasse': nomeclasse, 'tipocomb': tipocomb.upper()}
24 | for k, cv in tcusto.items():
25 | CLAST[CodUTE]['custo' + str(k)] = cv
26 | else:
27 | if (v[0:4] != " NUM") and (v.strip() != "") and (v[0:5] != " XXXX"):
28 | CodUTE = v[0:6].strip()
29 | custo = v[7:16].strip()
30 | mesi = v[17:20].strip()
31 | anoi = v[20:25].strip()
32 | mesf = v[25:29].strip()
33 | anof = v[29:34].strip()
34 | ModifCLAST[CodUTE].append(
35 | {'custo': custo,
36 | 'mesi': mesi,
37 | 'anoi': anoi,
38 | 'mesf': mesf,
39 | 'anof': anof})
40 | return CLAST, ModifCLAST
41 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCONFHD.py:
--------------------------------------------------------------------------------
1 | from collections import OrderedDict as odict
2 |
3 |
4 | def importCONFHD(fobj):
5 | CONFHD = odict()
6 | for i, v in enumerate(fobj):
7 | vline = v.decode('utf-8')
8 | if vline.strip() and vline[0:5].strip() != 'NUM' and 'XXXX' not in vline:
9 | coduhe = vline[:5].strip()
10 | nome = vline[5:20].strip()
11 | if nome == 'BELO MONTE C':
12 | nome = 'B.MONTE COMP'
13 | # POSTO JUS REE V.INIC U.EXIS MODIF INIC.HIST FIM HIST
14 | cols = vline[20:].split()
15 | posto = cols[0]
16 | jus = cols[1]
17 | ssis = cols[2]
18 | vinic = cols[3]
19 | uexis = cols[4]
20 | modif = cols[5]
21 | inihist = cols[6]
22 | fimhist = cols[7]
23 | desv_1 = None
24 | desv_2 = None
25 | try:
26 | desv_1 = cols[8]
27 | desv_2 = cols[9]
28 | # posto = vline[20:26].strip()
29 | # jus = vline[26:31].strip()
30 | # ssis = vline[31:36].strip()
31 | # vinic = vline[35:43].strip()
32 | # uexis = vline[42:49].strip()
33 | # modif = vline[48:56].strip()
34 | # inihist = vline[55:64].strip()
35 | # fimhist = vline[64:73].strip()
36 | # desv_1 = None
37 | # desv_2 = None
38 | # try:
39 | # if len(vline) > 73:
40 | # desv_1 = vline[72:78].strip()
41 | # desv_2 = vline[78:].strip()
42 | except Exception as e:
43 | pass
44 | CONFHD[coduhe] = {
45 | 'nome': nome,
46 | 'posto': posto,
47 | 'jus': jus,
48 | 'ssis': ssis,
49 | 'vinic': vinic,
50 | 'uexis': uexis,
51 | 'modif': modif,
52 | 'inihist': inihist,
53 | 'fimhist': fimhist,
54 | 'desv_1': desv_1,
55 | 'desv_2': desv_2}
56 | return CONFHD
57 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importCONFT.py:
--------------------------------------------------------------------------------
1 | def importCONFT(fobj):
2 | CONFT = dict()
3 | for i, v in enumerate(fobj):
4 | if (i > 1) and (v.strip() != ""):
5 | CodUTE = v[0:6].decode('utf-8').strip()
6 | nome = v[6:20].decode('utf-8').strip()
7 | ssis = v[20:26].decode('utf-8').strip()
8 | exis = v[26:33].decode('utf-8').strip()
9 | classe = v[33:40].decode('utf-8').strip()
10 | CONFT[CodUTE] = {
11 | 'nome': nome,
12 | 'ssis': ssis,
13 | 'exis': exis,
14 | 'classe': classe}
15 | return CONFT
16 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importDGER.py:
--------------------------------------------------------------------------------
1 | from unidecode import unidecode
2 | import re
3 |
4 | def importDGER(fobj):
5 | DGER = dict()
6 | DGER['info'] = None
7 | for bline in fobj.readlines():
8 | try:
9 | line = bline.decode('utf-8')
10 |         except Exception:
11 | line = unidecode(str(bline))
12 | if not DGER['info']:
13 | DGER['info'] = unidecode(str(line)).strip().replace("b'",'').replace('\\xe3','a').replace("\\r\\n'",'')
14 | elif line.rfind('TIPO DE EXECUCAO') != -1:
15 | DGER['tipoexec'] = int(line[21:25].strip())
16 | elif line.rfind('DURACAO DO PERIODO') != -1:
17 | DGER['durper'] = int(line[21:26].strip())
18 | elif line.rfind('No. DE ANOS DO EST') != -1:
19 | DGER['nyears'] = int(line[21:26].strip())
20 | # elif line.rfind('MES INICIO PRE-EST') != -1:
21 | # DGER['nmpre'] = int(line[21:26].strip())
22 | elif line.rfind('MES INICIO PRE-EST') != -1:
23 | DGER['mipre'] = int(line[21:26].strip())
24 | elif line.rfind('No. DE ANOS PRE') != -1:
25 | DGER['nypre'] = int(line[21:26].strip())
26 | elif line.rfind('No. DE ANOS POS FINAL') != -1:
27 | DGER['nyposf'] = int(line[21:26].strip())
28 | elif line.rfind('No. DE ANOS POS') != -1:
29 | DGER['nypos'] = int(line[21:26].strip())
30 | elif line.rfind('MES INICIO DO ESTUDO') != -1:
31 | # DGER['mi'] = int(line[21:26].strip())
32 | temp = re.findall(r'-?\d+\.?\d*', line)
33 | DGER['mi'] = int(temp[0])
34 |
35 | elif line.rfind('ANO INICIO DO ESTUDO') != -1:
36 | DGER['yi'] = int(line[21:26].strip())
37 | elif line.rfind('No DE SERIES SINT.') != -1:
38 | DGER['nsint'] = int(line[21:26].strip())
39 | elif line.rfind('ANO INICIAL HIST.') != -1:
40 | DGER['hiy'] = int(line[21:26].strip())
41 | elif line.rfind('CALCULA VOL.INICIAL') != -1:
42 | DGER['calcvolini'] = int(line[21:26].strip())
43 | elif line.rfind('TIPO SIMUL. FINAL') != -1:
44 | DGER['tsimfinal'] = int(line[21:26].strip())
45 | elif line.rfind('RACIONAMENTO PREVENT.') != -1:
46 | DGER['raciopreven'] = int(line[22:26].strip())
47 | elif line.rfind("No. ANOS MANUT.UTE'S") != -1:
48 | DGER['anosmanutt'] = int(line[22:26].strip())
49 | elif line.rfind("TENDENCIA HIDROLOGICA") != -1:
50 | DGER['tendhidr'] = unidecode(str(line[2:35])).strip()
51 | #(int(line[23:26].strip()),int(line[27:31].strip()))
52 | elif line.rfind('DURACAO POR PATAMAR') != -1:
53 | DGER['durppat'] = int(line[22:26].strip())
54 | elif line.rfind('OUTROS USOS DA AGUA') != -1:
55 | DGER['usoagua'] = int(line[22:26].strip())
56 | elif line.rfind('CORRECAO DESVIO') != -1:
57 | DGER['corredesvio'] = int(line[22:26].strip())
58 | elif line.rfind('AGRUPAMENTO LIVRE') != -1:
59 | DGER['agruplivre'] = int(line[22:26].strip())
60 | elif line.rfind('REPRESENT.SUBMOT.') != -1:
61 | DGER['repressubmot'] = int(line[22:26].strip())
62 | elif line.rfind('CONS. CARGA ADICIONAL') != -1:
63 | DGER['c_adic'] = int(line[22:26].strip())
64 | elif line.rfind('DESP. ANTEC. GNL') != -1:
65 | DGER['despantgnl'] = int(line[22:26].strip())
66 | elif line.rfind('MODIF.AUTOM.ADTERM') != -1:
67 | DGER['modifautoadterm'] = int(line[22:26].strip())
68 | elif line.rfind('CONSIDERA GHMIN') != -1:
69 | DGER['ghmin'] = int(line[22:26].strip())
70 | elif line.rfind('SAR') != -1:
71 | try:
72 | DGER['sar'] = int(line[22:26].strip())
73 |             except Exception:
74 | temp = re.findall(r'-?\d+\.?\d*', line)
75 | DGER['sar'] = int(temp[0])
76 |
77 | elif line.rfind('CVAR') != -1:
78 | try:
79 | DGER['cvar'] = int(line[22:26].strip())
80 |             except Exception:
81 | temp = re.findall(r'-?\d+\.?\d*', line)
82 | DGER['cvar'] = int(temp[0])
83 |
84 | elif line.rfind('TAXA DE DESCONTO') != -1:
85 | try:
86 | DGER['txdesc'] = float(line[20:26].strip())
87 |             except Exception:
88 | DGER['txdesc'] = float(line[:10].strip())
89 |
90 |
91 | DGER['ni'] = (12 - int(DGER['mi'])+1) + (int(DGER['nyears']) - 1) * 12
92 | anosplan = list()
93 | ano_ini = DGER['yi']
94 | for anos in range(DGER['nyears']):
95 | anosplan.append(ano_ini+anos)
96 | DGER['yph'] = anosplan
97 | fobj.close()
98 | return DGER
99 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importDSVAGUA.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.imputils import line2list
2 |
3 |
4 | def importDSVAGUA(fdata, uhes, dger, decktype=False):
5 | DSVAGUA = dict()
6 | for i in uhes:
7 | DSVAGUA[i] = [0.0] * dger['ni']
8 | cline = 2
9 | tipo = ''
10 | valores = [0.0] * dger['ni']
11 | inicio = 0
12 | while fdata[cline].strip() != '9999':
13 | # print 'fdata[cline] = ',fdata[cline]
14 | ano = fdata[cline][0:4].strip()
15 | if ano in str(dger['yph']):
16 | CodUHE = fdata[cline][5:9].strip()
17 |             # Check whether this is a type line; if not, keep using the previous type.
18 |             if len(fdata[cline]) > 104:
19 |                 # type line
20 | valores = list()
21 | tipo = fdata[cline][102:120].strip()
22 | if (tipo == 'Usos_Consuntivos'):
23 | txttipo = 'usocon'
24 | else:
25 | txttipo = 'vazrem'
26 | if int(ano) == dger['yi']:
27 | mesini = dger['mi']
28 | else:
29 | mesini = 1
30 | valores = line2list(dline=fdata[cline][9:95], mi=mesini, ar=ano,
31 | mf=12, bloco=7, vlista=valores, dger=dger)
32 | if ano == str(dger['yph'][-1]):
33 | DSVAGUA[CodUHE] = list(
34 | map(sum, zip(DSVAGUA[CodUHE], valores)))
35 | cline += 1
36 | return DSVAGUA
37 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importEXPH.py:
--------------------------------------------------------------------------------
1 | def importEXPH(fdata):
2 | ENCHVM = dict()
3 | MOTORI = dict()
4 | cline = 3
5 | while cline < len(fdata):
6 | coduhe = fdata[cline][0:5].strip()
7 | nome = fdata[cline][5:18].strip()
8 | if fdata[cline][17:43].strip():
9 |             # Dead-storage filling (enchimento de volume morto)
10 | iniench = fdata[cline][17:26].strip()
11 | durmeses = fdata[cline][26:36].strip()
12 | pct = fdata[cline][36:43].strip()
13 | cline = cline + 1
14 | ENCHVM[coduhe] = {'nome': nome, 'iniench': iniench,
15 | 'durmeses': durmeses, 'pct': pct}
16 | else:
17 | ENCHVM[coduhe] = {'nome': nome, 'iniench': None,
18 | 'durmeses': None, 'pct': None}
19 | if fdata[cline].strip() == '9999':
20 | cline = cline + 1
21 | continue
22 |         # Unit commissioning schedule (motorizacao)
23 | MOTORI[coduhe] = list()
24 | while fdata[cline].strip() != '9999':
25 |             # loop over the block
26 | maq = None
27 | conj = None
28 | if len(fdata[cline]) > 61:
29 |                 # New format: the number of machines and the unit group are given
30 | # if fdata[cline][:4].strip() == '':
31 | if '(' in fdata[cline]:
32 | portion = fdata[cline].split('(')[1].split(')')[0].split()
33 | maq = int(portion[0])
34 | conj = int(portion[1])
35 | else:
36 | maq = int(fdata[cline][59:62].strip().replace('(', ''))
37 | conj = int(fdata[cline][62:].strip())
38 | dataexp = fdata[cline][33:51].strip()
39 | potexp = fdata[cline][51:59].strip()
40 | MOTORI[coduhe].append({'data': dataexp, 'pot': potexp,
41 | 'nome': nome, 'maq': maq, 'conj': conj})
42 | cline = cline + 1
43 | return ENCHVM, MOTORI
44 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importEXPT.py:
--------------------------------------------------------------------------------
1 | def importEXPT(fobj, utes):
2 | EXPT = dict()
3 | for i in utes:
4 | EXPT[i] = list()
5 | for i, v in enumerate(fobj):
6 |         if (i > 1) and (v.strip() != b""):
7 | CodUTE = v[0:4].decode('utf-8').strip()
8 | tipo = v[5:11].decode('utf-8').strip()
9 | modif = v[12:20].decode('utf-8').strip()
10 | mi = v[20:23].decode('utf-8').strip()
11 | anoi = v[23:28].decode('utf-8').strip()
12 | mf = v[28:31].decode('utf-8').strip()
13 | anof = v[31:36].decode('utf-8').strip()
14 | EXPT[CodUTE].append({
15 | 'tipo': tipo,
16 | 'modif': modif,
17 | 'mi': mi,
18 | 'anoi': anoi,
19 | 'mf': mf,
20 | 'anof': anof
21 | })
22 | return EXPT
23 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importMANUTT.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def importMANUTT(fobj, utes):
4 | MANUTT = dict()
5 | for i in utes:
6 | MANUTT[i] = list()
7 | for i, v in enumerate(fobj):
8 |         if (i > 1) and (v.strip() != b""):
9 | CodUTE = v[17:20].strip()
10 | nome = v[20:35].strip()
11 | unidtermica = v[38:40].strip()
12 | data = v[40:49].strip()
13 | dia = data[0:2].strip()
14 | mes = data[2:4].strip()
15 | ano = data[4:8].strip()
16 | dur = v[49:53].strip()
17 | pot = v[55:64].strip()
18 | MANUTT[CodUTE.decode('utf-8')].append(
19 | {'nome': nome.decode('utf-8'),
20 | 'unidtermica': unidtermica.decode('utf-8'),
21 | 'data': data.decode('utf-8'),
22 | 'dia': dia.decode('utf-8'),
23 | 'mes': mes.decode('utf-8'),
24 | 'ano': ano.decode('utf-8'),
25 | 'dur': dur.decode('utf-8'),
26 | 'pot': pot.decode('utf-8')})
27 | return MANUTT
28 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importMODIF.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def importMODIF(fobj):
4 | MODIF = dict()
5 | CodUHE = ''
6 | for i, rv in enumerate(fobj):
7 | v = rv.decode('utf-8')
8 | cols = v.split()
9 | if i > 1 and 'USINA' in v:
10 | CodUHE = cols[1].strip()
11 | elif i > 1 and v.strip() != "":
12 | tipo = cols[0].strip().upper()
13 | indice = ''
14 | mes = ''
15 | ano = ''
16 | if tipo == 'NUMMAQ':
17 | modif = [cols[1].strip(), cols[2].strip()]
18 | elif tipo in ['VOLMAX', 'VOLMIN']:
19 | modif = cols[1].strip()
20 | indice = cols[2].strip()
21 | elif tipo in ['VAZMIN', 'NUMCNJ', 'PRODESP', 'TEIF', 'IP',
22 | 'PERDHIDR', 'NUMBAS']:
23 | modif = cols[1]
24 | elif tipo in ['COEFEVAP']:
25 | modif = cols[1]
26 | mes = cols[2]
27 | elif tipo in ['VMAXT', 'VMINT', 'CFUGA', 'VMINP', 'VAZMINT',
28 | 'CMONT']:
29 | mes = cols[1].strip()
30 | ano = cols[2].strip()
31 | modif = cols[3].strip()
32 | if len(cols) > 4:
33 | indice = cols[4].strip()
34 | else:
35 |                 # POTEFE, COTAREA, VOLCOTA and other multi-value records
36 | modif = ' '.join([x.strip() for x in cols[1:]])
37 | if CodUHE not in MODIF:
38 | MODIF[CodUHE] = list()
39 | MODIF[CodUHE].append({'tipo': tipo,
40 | 'modif': modif,
41 | 'mes': mes,
42 | 'ano': ano,
43 | 'indice': indice})
44 | return MODIF
45 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importPATAMAR.py:
--------------------------------------------------------------------------------
1 | from deckparser.importers.imputils import line2list, searchInList
2 |
3 |
4 | def importPATAMAR(fdata, dger, sss):
5 |     # Start reading the file
6 |     # Check the number of load levels (patamares de carga)
7 | idxline = 2
8 | numpatamarcarga = int(fdata[idxline].strip())
9 |     # Read the duration of each load level
10 | idxline = 6
11 | PATDURA = dict()
12 | PATCARGA = dict()
13 | PATINTER = list()
14 |     PATNSIM = dict()  # multipliers for non-simulated plants
15 | if numpatamarcarga == 1:
16 | PATDURA[0] = [1.0] * dger['ni']
17 | else:
18 | for i in range(numpatamarcarga):
19 | PATDURA[i] = list()
20 | for idss in sss:
21 | PATCARGA[idss] = dict()
22 | for i in range(numpatamarcarga):
23 | PATCARGA[idss][i] = list()
24 | for _anoi in range(dger['nyears']):
25 | if fdata[idxline][0:5].strip() in str(dger['yph']):
26 | anopat = fdata[idxline][0:5].strip()
27 | for idpat in range(numpatamarcarga):
28 |                 # Reading the values
29 | mesini = 1
30 | if int(anopat) == dger['yi']:
31 | mesini = dger['mi']
32 | PATDURA[idpat] = line2list(dline=fdata[idxline][5:102],
33 | mi=mesini, ar=anopat, mf=12,
34 | bloco=8, vlista=PATDURA[idpat],
35 | dger=dger)
36 | idxline = idxline + 1
37 |     # Load the per-subsystem load-level factors
38 | idxline = searchInList(fdata, 'CARGA(P.U.DEMANDA MED.)')['line'] + 2
39 | for idxss in sss:
40 | ssis = int(fdata[idxline].strip())
41 | if numpatamarcarga == 1:
42 | PATCARGA[str(ssis)] = [[1.0] * dger['ni']]
43 | continue
44 | if int(idxss) == ssis:
45 | idxline = idxline + 1
46 | for _anoi in range(dger['nyears']):
47 | if fdata[idxline][0:8].strip() in str(dger['yph']):
48 | anopat = fdata[idxline][0:8].strip()
49 | for idpat in range(numpatamarcarga):
50 |                         # Reading the values
51 | mesini = 1
52 | if int(anopat) == dger['yi']:
53 | mesini = dger['mi']
54 | PATCARGA[str(ssis)][idpat] = line2list(
55 | dline=fdata[idxline][7:91], mi=mesini, ar=anopat,
56 | mf=12, bloco=7, vlista=PATCARGA[str(ssis)][idpat],
57 | dger=dger)
58 | idxline = idxline + 1
59 |
60 | idxline = searchInList(fdata, 'INTERCAMBIO(P.U.INTERC.MEDIO)')['line'] + 2
61 | while len(fdata[idxline].strip()) > 0 and numpatamarcarga > 1:
62 | vals = fdata[idxline].split()
63 | # if fdata[idxline][0:2].strip() == '' and fdata[idxline][0:5].strip() != '':
64 | if len(vals) == 2:
65 | ori, des = vals
66 | # ori = fdata[idxline][0:5].strip()
67 | # des = fdata[idxline][5:8].strip()
68 | PATINTER.append({'ORI': ori, 'DES': des,
69 | 'pat1': list(), 'pat2': list(), 'pat3': list(),
70 | 'pat4': list(), 'pat5': list(), 'pat6': list()})
71 | idxline = idxline + 1
72 | for _anoi in range(dger['nyears']):
73 | if fdata[idxline][0:8].strip() in str(dger['yph']):
74 | anopat = fdata[idxline][0:8].strip()
75 | for idpat in range(numpatamarcarga):
76 |                     # Reading the values
77 | mesini = 1
78 | if int(anopat) == dger['yi']:
79 | mesini = dger['mi']
80 | PATINTER[len(PATINTER) - 1]['pat' + str(idpat + 1)] =\
81 | line2list(dline=fdata[idxline][7:91],
82 | mi=mesini, ar=anopat,
83 | mf=12, bloco=7,
84 | vlista=PATINTER[len(PATINTER) - 1][
85 | 'pat' + str(idpat + 1)],
86 | dger=dger)
87 | idxline = idxline + 1
88 | elif len(vals) == 1 and vals[0] == '9999':
89 | break
90 | if idxline >= len(fdata):
91 | break
92 |
93 | idx = searchInList(fdata,
94 | 'BLOCO DE USINAS NAO SIMULADAS (P.U. ENERGIA MEDIA)')
95 | if idx == {}:
96 | idx = searchInList(fdata,
97 | 'BLOCO DE USINAS NAO SIMULADAS (P.U. MONTANTE MED.)')
98 |
99 | idxline = idx['line'] + 2 if idx else None
100 | for idss in sss:
101 | PATNSIM[idss] = dict()
102 | for i in range(numpatamarcarga):
103 | PATNSIM[idss][i] = {
104 | 'bloco01': [],
105 | 'bloco02': [],
106 | 'bloco03': [],
107 | 'bloco04': [],
108 | 'bloco05': [],
109 | 'bloco06': [],
110 | 'bloco07': [],
111 | 'bloco08': [],
112 | 'bloco09': [],
113 | 'bloco10': []
114 | }
115 |
116 | if idxline:
117 | while len(fdata[idxline].strip()) > 0 and numpatamarcarga > 1:
118 | vals = fdata[idxline].split()
119 |
120 | if len(vals) == 2:
121 | subsis, bloc = vals
122 | idxline = idxline + 1
123 |
124 | for _anoi in range(dger['nyears']):
125 | anopat = fdata[idxline][0:8].strip()
126 | if anopat in str(dger['yph']):
127 | for idpat in range(numpatamarcarga):
128 |                             # Reading the values
129 | mesini = 1
130 | if int(anopat) == dger['yi']:
131 | mesini = dger['mi']
132 |                             PATNSIM[subsis][idpat]['bloco' + str(bloc).zfill(2)] =\
133 | line2list(dline=fdata[idxline][7:91],
134 | mi=mesini,
135 | ar=anopat,
136 | mf=12,
137 | bloco=7,
138 | vlista=PATNSIM[subsis][idpat][
139 |                                            'bloco' + str(bloc).zfill(2)],
140 | dger=dger)
141 | idxline = idxline + 1
142 | elif len(vals) == 1 and vals[0] == '9999':
143 | break
144 | if idxline >= len(fdata):
145 | break
146 |
147 | return PATDURA, PATCARGA, PATINTER, PATNSIM, numpatamarcarga
148 |
149 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importREE.py:
--------------------------------------------------------------------------------
1 | from collections import OrderedDict as odict
2 |
3 | def importREE(fobj):
4 | REE = odict()
5 | REElabels = odict()
6 | # Skip header:
7 | # SUBSISTEMAS X SUBMERCADOS
8 | # NUM|NOME SSIS.| SUBM
9 | # XXX|XXXXXXXXXX| XXX
10 | content = [x.decode('utf-8') for x in fobj]
11 | for i, v in enumerate(content[3:]):
12 | if v.strip() != '' and v.strip() != '999':
13 | # cols = v.strip().split()
14 | # subsis = cols[0].strip()
15 | # label = cols[1].strip()
16 | # submer = cols[2].strip()
17 | # REE[subsis] = submer
18 | # REElabels[subsis] = label
19 |
20 | subsis = v[2:4].strip()
21 | label = v[5:15].strip()
22 | submer = v[18:21].strip()
23 | REE[subsis] = submer
24 | REElabels[subsis] = label
25 |
26 | return REE, REElabels
27 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importSHIST.py:
--------------------------------------------------------------------------------
1 | def importSHIST(fobj):
2 | return fobj[2][4:].strip()
--------------------------------------------------------------------------------
/deckparser/importers/newave/importTERM.py:
--------------------------------------------------------------------------------
1 |
2 | def importTERM(fobj):
3 | TERM = dict()
4 | for i, v in enumerate(fobj):
5 |         if (i > 1) and (v.strip() != b""):
6 | CodUTE = v[0:4].decode('utf-8').strip()
7 | nome = v[5:18].decode('utf-8').strip()
8 | pot = v[19:24].decode('utf-8').strip()
9 | fcmax = v[25:30].decode('utf-8').strip()
10 | teif = v[31:38].decode('utf-8').strip()
11 | ip = v[39:45].decode('utf-8').strip()
12 | gtmin1 = v[45:52].decode('utf-8').strip()
13 | gtmin2 = v[52:59].decode('utf-8').strip()
14 | gtmin3 = v[59:66].decode('utf-8').strip()
15 | gtmin4 = v[66:73].decode('utf-8').strip()
16 | gtmin5 = v[73:80].decode('utf-8').strip()
17 | gtmin6 = v[80:87].decode('utf-8').strip()
18 | gtmin7 = v[87:94].decode('utf-8').strip()
19 | gtmin8 = v[94:101].decode('utf-8').strip()
20 | gtmin9 = v[101:108].decode('utf-8').strip()
21 | gtmin10 = v[108:115].decode('utf-8').strip()
22 | gtmin11 = v[115:122].decode('utf-8').strip()
23 | gtmin12 = v[122:129].decode('utf-8').strip()
24 | gtdmais = v[129:137].decode('utf-8').strip()
25 |             TERM[CodUTE] = {
26 | 'nome': nome,
27 | 'pot': pot,
28 | 'fcmax': fcmax,
29 | 'teif': teif,
30 | 'ip': ip,
31 | 'gtmin1': gtmin1,
32 | 'gtmin2': gtmin2,
33 | 'gtmin3': gtmin3,
34 | 'gtmin4': gtmin4,
35 | 'gtmin5': gtmin5,
36 | 'gtmin6': gtmin6,
37 | 'gtmin7': gtmin7,
38 | 'gtmin8': gtmin8,
39 | 'gtmin9': gtmin9,
40 | 'gtmin10': gtmin10,
41 | 'gtmin11': gtmin11,
42 | 'gtmin12': gtmin12,
43 | 'gtdmais': gtdmais}
44 | return TERM
45 |
--------------------------------------------------------------------------------
/deckparser/importers/newave/importVAZOES.py:
--------------------------------------------------------------------------------
1 | from numpy import int32, dtype, fromfile
2 | # from datetime import date
3 | from os import remove
4 |
5 |
6 | def importVAZOES(fn, hcount, dger):
7 | lsttipovaz = [('Vazao', int32)]
8 | vaztipo = dtype(lsttipovaz)
9 | vazdata = fromfile(fn, dtype=vaztipo, sep="")
10 | VAZcount = len(vazdata)
11 | VAZOES = dict()
12 | vazl = dict()
13 |     # Initialize the flow dict for each gauging station (posto)
14 | # anocorrente = date.today().year
15 | for x in range(1, hcount + 1):
16 | VAZOES[x] = dict()
17 | vazl[x] = list()
18 | for i in range(dger['hiy'], int(dger['yi']) + 1):
19 | VAZOES[x][i] = dict()
20 | ano = dger['hiy']
21 | mes = 1
22 |     # flatten vazdata into a plain list of values
23 | vazdatalist = [vaz[0] for vaz in vazdata]
24 | totele = len(vazdatalist)
25 | idxele = 0
26 | while idxele < totele:
27 | posto = 1
28 | while posto <= hcount:
29 | try:
30 | vazvalue = float(vazdatalist[idxele])
31 | except Exception as e:
32 | vazvalue = -1
33 | VAZOES[posto][ano][mes] = vazvalue
34 | vazl[posto].append(vazvalue)
35 | posto = posto + 1
36 | idxele = idxele + 1
37 | if mes < 12:
38 | mes = mes + 1
39 | else:
40 | mes = 1
41 | ano = ano + 1
42 | remove(fn)
43 | return VAZOES, VAZcount, vazl
44 |
--------------------------------------------------------------------------------
/deckparser/importers/suishi/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/deckparser/importers/suishi/__init__.py
--------------------------------------------------------------------------------
/deckparser/newave2dicts.py:
--------------------------------------------------------------------------------
1 | from deckparser.newavezipped import NewaveZipped
2 | from deckparser.newavedicted import NewaveDicted
3 | from deckparser.importers.newave.importDGER import importDGER
4 | from deckparser.importers.newave.importSISTEMA import importSISTEMA
5 | from deckparser.importers.newave.importCAR import importCAR
6 | from deckparser.importers.newave.importTERM import importTERM
7 | from deckparser.importers.newave.importEXPT import importEXPT
8 | from deckparser.importers.newave.importCONFT import importCONFT
9 | from deckparser.importers.newave.importCLAST import importCLAST
10 | from deckparser.importers.newave.importMANUTT import importMANUTT
11 | from deckparser.importers.newave.importHIDR import importHIDR
12 | from deckparser.importers.newave.importCONFHD import importCONFHD
13 | from deckparser.importers.newave.importMODIF import importMODIF
14 | from deckparser.importers.newave.importDSVAGUA import importDSVAGUA
15 | from deckparser.importers.newave.importVAZOES import importVAZOES
16 | from deckparser.importers.newave.importCADIC import importCADIC
17 | from deckparser.importers.newave.importEXPH import importEXPH
18 | from deckparser.importers.newave.importPATAMAR import importPATAMAR
19 | from deckparser.importers.newave.importCADTERM import importCADTERM
20 | from deckparser.importers.newave.importSHIST import importSHIST
21 | from deckparser.importers.newave.importREE import importREE
22 | from logging import info
23 |
24 |
25 | def newave2dicts(fn):
26 | """
27 | Open the zipped file and start to import data into python dicts and lists
28 | """
29 | dz = NewaveZipped(fn=fn)
30 | if dz.zipLoaded():
31 | dd = NewaveDicted()
32 | dd.dirname = dz.dirname
33 | dd.filename = dz.filename
34 | dd.fhash = dz.fhash
35 | dd.DGER = importDGER(dz.openFile(fnp='dger'))
36 | dd.SISTEMA = importSISTEMA(dz.openFileExtData(fnp='sistema'), dd.DGER)
37 | dd.process_ss()
38 | dd.PATDURA, dd.PATCARGA, dd.PATINTER, dd.PATNSIM, dd.np = \
39 | importPATAMAR(dz.openFileExtData(fnp='patamar'), dd.DGER, dd.sss)
40 | dd.CAR = importCAR(dz.openFileExtData(fnp='curva'), dd.DGER)
41 | dd.CADIC = importCADIC(dz.openFileExtData(fnp='c_adic'), dd.DGER)
42 | dd.TERM = importTERM(dz.openFile(fnp='term'))
43 | dd.CADTERM = importCADTERM(dz.openFile(fnp='cadterm'))
44 | dd.EXPT = importEXPT(fobj=dz.openFile(fnp='expt'), utes=dd.TERM.keys())
45 | dd.CONFT = importCONFT(dz.openFile(fnp='conft'))
46 | dd.CLAST, dd.MODIFCLAST = importCLAST(
47 | fobj=dz.openFile(fnp='clast'), utes=dd.TERM.keys(),
48 | nyears=len(dd.DGER['yph']))
49 | dd.MANUTT = importMANUTT(
50 | fobj=dz.openFile(fnp='manutt'), utes=dd.TERM.keys())
51 | dd.HIDR, dd.HIDRcount = importHIDR(fn=dz.extractFile(fnp='hidr'))
52 | for conff in ['confhd', 'confh']:
53 | try:
54 | dd.CONFHD = importCONFHD(dz.openFile(fnp=conff))
55 | except Exception as e:
56 | info('File {} not found. {}'.format(conff, e))
57 | for modiff in ['modif', 'modif55']:
58 | try:
59 | fobj = dz.openFile(fnp=modiff)
60 | if fobj:
61 | dd.MODIF = importMODIF(fobj=fobj)
62 | except Exception as e:
63 | info('File {} not found. {}'.format(modiff, e))
64 | if 'DECK DO PMO' in dz.fns_set:
65 | decktype = 'pmo'
66 | else:
67 | decktype = False
68 | dd.DSVAGUA = importDSVAGUA(
69 | dz.openFileExtData(fnp='dsvagua'), uhes=dd.CONFHD.keys(),
70 | dger=dd.DGER, decktype=decktype)
71 | dd.VAZOES, dd.VAZcount, dd.vaz = importVAZOES(
72 | fn=dz.extractFile(fnp='vazoes'), hcount=dd.HIDRcount,
73 | dger=dd.DGER)
74 | dd.ENCHVM, dd.MOTORI = importEXPH(dz.openFileExtData(fnp='exph'))
75 | dd.SHISTANO = importSHIST(dz.openFileExtData(fnp='shist'))
76 | try:
77 | dd.REE, dd.REElabels = importREE(fobj=dz.openFile(fnp='ree'))
78 | except Exception as e:
79 | info('File REE.dat not found. {}'.format(e))
80 |
81 | # Start parse and processing data
82 | # Split data into data structs considering the configuration
83 | # for the planning study
84 | # dd.prepareDataStructs()
85 | # Apply MODIF - parse updating info and apply into structures created
86 | # in the last method
87 | # dd.processMODIF()
88 | # Create time series for certain values
89 | # dd.preparePosMODIFDataStructs()
90 | # Aplicar EXPH
91 | # dd.processEXPH()
92 | # Process Thermal Series
93 | # dd.processThermalPlants()
94 | # Process inflows
95 | # dd.loadVAZData()
96 |
97 | return dd
98 | else:
99 | return None
100 |
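# --- Usage sketch (illustrative only, not part of the module) ---
# 'NW201801.zip' is a hypothetical deck archive. newave2dicts returns a
# populated NewaveDicted instance, or None when the argument is not a valid
# zip file.
#
#   from deckparser.newave2dicts import newave2dicts
#
#   deck = newave2dicts('NW201801.zip')
#   if deck is not None:
#       print(deck.DGER['yi'], deck.DGER['nyears'])       # study start year / horizon
#       print(len(deck.CONFHD), 'hydro plants in CONFHD')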
--------------------------------------------------------------------------------
/deckparser/newavezipped.py:
--------------------------------------------------------------------------------
1 | import zipfile
2 | import os
3 | from uuid import uuid4 as hasher
4 | from logging import info
5 |
6 |
7 | class NewaveZipped(object):
8 | def __init__(self, fn=None):
9 |         # zipped file that will be opened
10 | self.z = None
11 | self.dirname = None
12 | self.zipfilename = None
13 | self.filename = None
14 | self.fhash = None
15 | self.fns_set = None
16 | self.internal_dir = None
17 | if fn:
18 | self.setZipFile(fn)
19 | self.openZip()
20 | else:
21 | self.fn = None
22 |
23 | def __del__(self):
24 | if self.z:
25 | self.z.close()
26 |
27 | def zipLoaded(self):
28 | if self.z:
29 | return True
30 | else:
31 | return False
32 |
33 | def setZipFile(self, fn):
34 | self.fn = fn
35 |
36 | def openZip(self):
37 | if zipfile.is_zipfile(self.fn):
38 | self.z = zipfile.ZipFile(self.fn, 'r')
39 | real_path = os.path.realpath(self.fn)
40 | self.dirname = os.path.dirname(real_path)
41 | self.zipfilename = real_path.split("/")[-1]
42 | self.filename = self.zipfilename.split(".")[-2]
43 | self.fhash = str(hasher())
44 | self.fns_set = dict()
45 | for fn in self.z.namelist():
46 | if '/' in fn:
47 | intdpath = '/'.join(fn.split('/')[0:-1]) + '/'
48 | if not self.internal_dir:
49 | self.internal_dir = intdpath
50 | if self.internal_dir and self.internal_dir != intdpath:
51 | raise Exception('Multiple directories inside a zipfile.')
52 | if self.internal_dir:
53 | evfn = fn.split('/')[-1].split('.')[0].upper()
54 | else:
55 | evfn = fn.split('.')[0].upper()
56 | if evfn == 'MODIF1':
57 | key = 'MODIF'
58 | else:
59 | key = evfn
60 | self.fns_set[key] = fn
61 | # Check if it is a deck
62 | deckfiles = ['dger', 'sistema', 'confh', 'confhd']
63 | zipfiles = list(self.fns_set.keys())
64 | if not all([fd.upper() in str(zipfiles) for fd in deckfiles]):
65 | # if not set([fd.lower() for fd in deckfiles]).issubset(set([fz.lower() for fz in zipfiles])):
66 | info('The file doesn\'t look like a deck file.')
67 | # Check if it has a zip
68 | possible_decks = dict()
69 | for kfname, fname in self.fns_set.items():
70 | if '.zip' in fname.lower():
71 | for obj in self.z.infolist():
72 | possible_decks[obj.filename] = obj.date_time
73 | break
74 | if len(possible_decks) > 0:
75 | newer = None
76 | for k, v in possible_decks.items():
77 | if not newer:
78 | newer = (k, v)
79 | else:
80 | if v > newer[1]:
81 | newer = (k, v)
82 | self.z.extract(newer[0], '/tmp')
83 | self.setZipFile('/tmp/' + newer[0])
84 | self.openZip()
85 | else:
86 | raise Exception('The file opened is not a deck file and doesn\'t have a deck in its files.')
87 |
88 | else:
89 | info(self.fn + " is not a zip file")
90 |
91 | def openFile(self, fnp):
92 | try:
93 | fname = self.fns_set[fnp.upper()]
94 | f = self.z.open(fname)
95 | return f
96 | except Exception:
97 |             info('Fail to open %s', fnp)
98 | print('Fail to open ', fnp)
99 | return False
100 |
101 | def openFileExtData(self, fnp):
102 | try:
103 | fname = self.fns_set[fnp.upper()]
104 | self.z.extract(fname, self.dirname)
105 | if self.internal_dir:
106 | destfile = self.dirname + '/' + self.internal_dir + '/' + self.fhash + '_' + fname.split('/')[-1]
107 | else:
108 | destfile = self.dirname + "/" + self.fhash + '_' + fname
109 | os.rename(self.dirname + "/" + fname, destfile)
110 | try:
111 | f = open(destfile, 'r')
112 | data = f.readlines()
113 | f.close()
114 | os.remove(destfile)
115 | return data
116 |             except UnicodeDecodeError:
117 | f = open(destfile, 'r', encoding='iso8859-1')
118 | data = f.readlines()
119 | f.close()
120 | os.remove(destfile)
121 | return data
122 | except Exception:
123 |             info('Fail to extract %s', fnp)
124 | return False
125 |
126 | def extractFile(self, fnp):
127 | try:
128 | fname = self.fns_set[fnp.upper()]
129 | self.z.extract(fname, self.dirname)
130 | if self.internal_dir:
131 | destfile = self.dirname + '/' + self.internal_dir + '/' + self.fhash + '_' + fname.split('/')[-1]
132 | else:
133 | destfile = self.dirname + "/" + self.fhash + '_' + fname
134 | os.rename(self.dirname + "/" + fname, destfile)
135 | return destfile
136 | except Exception:
137 |             info('Fail to extract %s', fnp)
138 | return False
139 |
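# --- Usage sketch (illustrative only; 'NW201801.zip' is a hypothetical archive) ---
# openFile returns a binary file object read straight from the zip,
# openFileExtData returns the decoded text lines, and extractFile returns the
# path of a temporary copy that the caller is expected to remove.
#
#   nz = NewaveZipped(fn='NW201801.zip')
#   if nz.zipLoaded():
#       dger_fobj = nz.openFile(fnp='dger')                 # bytes file object
#       sistema_lines = nz.openFileExtData(fnp='sistema')   # list of text lines
#       hidr_path = nz.extractFile(fnp='hidr')              # path to extracted binary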
--------------------------------------------------------------------------------
/deckparser/suishi2dicts.py:
--------------------------------------------------------------------------------
1 | from deckparser.suishizipped import SuishiZipped
2 | # from deckparser.suishidicted import SuishiDicted
3 | from deckparser.importers.suishi.loader import Loader
4 | # from deckparser.importers.newave.importDGER import importDGER
5 | # from deckparser.importers.newave.importSISTEMA import importSISTEMA
6 | # from deckparser.importers.newave.importCAR import importCAR
7 | # from deckparser.importers.newave.importTERM import importTERM
8 | # from deckparser.importers.newave.importEXPT import importEXPT
9 | # from deckparser.importers.newave.importCONFT import importCONFT
10 | # from deckparser.importers.newave.importCLAST import importCLAST
11 | # from deckparser.importers.newave.importMANUTT import importMANUTT
12 | # from deckparser.importers.newave.importHIDR import importHIDR
13 | # from deckparser.importers.newave.importCONFHD import importCONFHD
14 | # from deckparser.importers.newave.importMODIF import importMODIF
15 | # from deckparser.importers.newave.importDSVAGUA import importDSVAGUA
16 | # from deckparser.importers.newave.importVAZOES import importVAZOES
17 | # from deckparser.importers.newave.importCADIC import importCADIC
18 | # from deckparser.importers.newave.importEXPH import importEXPH
19 | # from deckparser.importers.newave.importPATAMAR import importPATAMAR
20 | # from deckparser.importers.newave.importCADTERM import importCADTERM
21 | # from deckparser.importers.newave.importSHIST import importSHIST
22 | # from deckparser.importers.newave.importREE import importREE
23 | # from logging import info
24 |
25 |
26 | def suishi2dicts(fn):
27 | """
28 | Open the zipped file and start to import data into python dicts and lists
29 | """
30 | dz = SuishiZipped(fn=fn)
31 | if dz.zipLoaded():
32 | loader = Loader(dz)
33 | return loader.dd
34 | else:
35 | return None
36 |
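# --- Usage sketch (illustrative only; 'suishi_case.zip' is a hypothetical archive) ---
# The Loader built from the zipped deck exposes the resulting data object
# through loader.dd, which suishi2dicts returns (None for an invalid zip).
#
#   from deckparser.suishi2dicts import suishi2dicts
#
#   dd = suishi2dicts('suishi_case.zip')
#   if dd is not None:
#       print(type(dd))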
--------------------------------------------------------------------------------
/deckparser/suishidicted.py:
--------------------------------------------------------------------------------
1 | # from logging import info
2 |
3 | # MODIF keys handled:
4 | # VOLMIN : ok
5 | # VOLMAX : ok
6 | # VAZMIN : ok
7 | # NUMCNJ : ok
8 | # NUMMAQ : ok
9 | # POTEFE : ok
10 | # PRODESP : ok
11 | # TEIF : ok
12 | # IP : ok
13 | # PERDHIDR : ok
14 | # COEFEVAP : ok
15 | # COTAAREA : ok
16 | # VOLCOTA : ok
17 | # CFUGA : ok
18 | # VMAXT : ok
19 | # VMINT : ok
20 | # NUMBAS : ok
21 | # VMINP : not handled
22 | # VAZMINT : ok
23 |
24 |
25 | class SuishiDicted(object):
26 | def __init__(self):
27 |         # Case file
28 |         self.CaseStudy = 'caso.dat'  # Name of the file that lists the input files used by the SUISHI model.
29 |         self.CaseFile = 'arquivo.eas'  # Names of the files used by the program
30 |         # Names stored in that file as A12 fields starting at column 31 (see the extraction sketch after the list below)
31 | self.functions = [
32 | {'type': 'DGER', 'name': 'importDGER'}, # dger.eas - ARQUIVO DE DADOS GERAIS,
33 | {'type': 'SISTEMA', 'name': 'importSISTEMA'}, # sistema.eas - ARQUIVO DADOS DOS SUBSISTEMA
34 | {'type': 'CONFH', 'name': 'importCONFH'}, # confh.eas - ARQUIVO CONFIG HIDROELETRICA
35 | {'type': 'TERM', 'name': 'importTERM'}, # term.eas - ARQUIVO CONFIGURACAO TERMICA
36 | {'type': 'CLAST', 'name': 'importCLAST'}, # clast.eas - ARQUIVO DADOS CLASSES TERMIC
37 | {'type': 'MODIF', 'name': 'importMODIF'} # modif55.eas - ARQUIVO ALTERACAO USIN HIDRO
38 | # 'EXPANSAO': 'expansao.eas', # ARQUIVO EXPANSAO HIDROTERMIC
39 | # 'VAZOES': 'vazoes.dat', # ARQUIVO DE VAZOES
40 | # 'POSTOS': 'postos.dat', # ARQUIVO DE POSTOS
41 | # 'HIDR': 'hidr.dat', # ARQUIVO DADOS USINAS HIDRO
42 | # 'DSVAGUA': 'dsvagua.eas', # ARQUIVO USOS ALTERNATIVOS
43 | # 'CORTESH': 'cortesh.eas' # ARQUIVO AUXILIAR DA FCF
44 | # 'CORTES' 'cortes.eas', # ARQUIVO FUNCAO CUSTO FUTURO
45 | # 'NEWDESP': 'newdesp.eas', # ARQUIVO NEWAVE DADOS CONFIG
46 | # 'SHP': 'shp.eas', # ARQUIVO INPUT SIMUL PARAIBA
47 | # 'PEQUSI': 'pequsi.eas', # ARQUIVO PEQUENAS USINAS
48 | # 'PATAMAR': 'patamar.eas', # ARQUIVO DE PATAMARES MERCADO
49 | # 'EAFPAST': 'eafpast.mlt', # ARQUIVO C/TEND. HIDROLOGICA
50 | # 'ITAIPU': 'itaipu.dat', # ARQUIVO RESTRICAO ITAIPU
51 | # 'BID': 'bid.dat', # ARQUIVO DEMAND SIDE BIDDING
52 | # 'C_ADIC': 'c_adic.dat', # ARQUIVO CARGAS ADICIONAIS
53 | # 'LOSS': 'loss.dat', # ARQUIVO PERDA DE TRANSMISSAO
54 | # 'SUISHI_REL': 'suishi.rel', # ARQUIVO RELATORIO
55 | # 'DIRETOR': 'diretor.csv', # ARQUIVO RESUMO OPER MENSAL
56 | # 'SUBSIS': 'subsis.csv', # ARQUIVO RESUMO SUBSISTEMA
57 | # 'USIHID': 'usihid.csv', # ARQUIVO RESUMO USI HIDROELET
58 | # 'USITER': 'usiter.csv', # ARQUIVO RESUMO USI TERMICA
59 | # 'PBSUISHI': 'pbsuishi.csv', # ARQUIVO OUTPUT PARAIBA SUL
60 | # 'PDISP': 'pdisp.dat', # ARQUIVO POTENCIA CONFIABILID
61 | # 'GTMINPAT': 'gtminpat.dat', # ARQUIVO FATOR G.TERMICA MIN.
62 | # 'ATIETE': 'atiete.dat', # ARQUIVO PAR. ALTO TIETE
63 | # 'CURVA_ARM': 'curva.eas', # ARQUIVO PAR. CURVA ARM MIN
64 | # 'GERTER': 'gerter.csv', # ARQUIVO GER. TERM. / CLASSE
65 | # 'INTER': 'inter.csv', # ARQUIVO DE INTERCAMBIOS
66 | # 'EFGA': 'efga.csv', # ARQUIVO ENERGIA FIRME/ASSEG
67 | # 'ADTERM': 'adterm.dat', # ARQUIVO DESP. TERM. ANTEC.
68 | # 'SAR': 'sar.dat', # ARQUIVO AVERSAO A RISCO SAR
69 | # 'CVAR': 'cvar.dat', # ARQUIVO AVERSAO A RISCO CVAR
70 | # 'CGUIAOP': 'cguiaop.dat', # ARQUIVO CURVA GUIA DE OPER.
71 | # 'VMAXTSAZ': 'vmaxtsaz.dat', # ARQUIVO VMAXT SAZONAL - E.F.
72 | # 'VAZMINTO': 'vazmintp.dat', # ARQUIVO VAZMINT PERIODO PRE
73 | # 'REE_DADOS': 'ree.dat', # ARQUIVO DE DADOS RES EQUIV
74 | # 'REE_RESUMO': 'ree.csv', # ARQUIVO RESUMO RESV EQUIVAL
75 | # }
76 | ]
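        # Illustrative sketch (assumption about the 'arquivo.eas' layout noted
        # above): each record line carries the file name as an A12 field
        # starting at column 31, i.e. the 0-based slice [30:42]:
        #
        #   file_name = line[30:42].strip()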
77 |         # System and study data
78 | self.DGER = None
79 | # self.SISTEMA = None
80 | # self.CAR = None
81 | # self.CADIC = None
82 | # self.PATDURA = None
83 | # self.PATCARGA = None
84 | # self.PATINTER = None
85 |
86 | #         # Thermal fleet data
87 | # self.TERM = None
88 | # self.CONFT = None
89 | # self.CADTERM = None
90 | # self.EXPT = None
91 | # self.CLAST = None
92 | # self.MODIFCLAST = None
93 | # self.MANUTT = None
94 |
95 | #         # Hydro fleet data
96 | # self.CONFHD = None
97 | # self.HIDR = None
98 | # self.HIDRcount = None
99 | self.MODIF = None
100 | # self.DSVAGUA = None
101 | # self.VAZOES = None
102 |
103 | # self.VAZcount = None
104 | # self.VAZMAX = None
105 |
106 | # self.ENCHVM = None
107 | # self.MOTORI = None
108 |
109 | # self.REE = None
110 | # self.REElabels = None
111 |
112 | # # Helpers
113 | # self.nss = None
114 | # self.sss = None
115 | # self.ssname = None
116 | # self.ssfict = None
117 |
--------------------------------------------------------------------------------
/deckparser/suishizipped.py:
--------------------------------------------------------------------------------
1 | import zipfile
2 | import os
3 | from uuid import uuid4 as hasher
4 | # from datetime import datetime
5 | from logging import info
6 | # from os.path import realpath, dirname
7 |
8 |
9 | class SuishiZipped(object):
10 | def __init__(self, fn=None):
11 |         # zipped file that will be opened
12 | self.z = None
13 | self.dirname = None
14 | self.zipfilename = None
15 | self.filename = None
16 | self.fhash = None
17 | self.fns_set = None
18 | self.internal_dir = None
19 | if fn:
20 | self.setZipFile(fn)
21 | self.openZip()
22 | else:
23 | self.fn = None
24 |
25 | def __del__(self):
26 | if self.z:
27 | self.z.close()
28 |
29 | def zipLoaded(self):
30 | if self.z:
31 | return True
32 | else:
33 | return False
34 |
35 | def setZipFile(self, fn):
36 | self.fn = fn
37 |
38 | def openZip(self):
39 | if zipfile.is_zipfile(self.fn):
40 | self.z = zipfile.ZipFile(self.fn, 'r')
41 | real_path = os.path.realpath(self.fn)
42 | self.dirname = os.path.dirname(real_path)
43 | self.zipfilename = real_path.split("/")[-1]
44 | self.filename = self.zipfilename.split(".")[-2]
45 | self.fhash = str(hasher())
46 | self.fns_set = dict()
47 | for fn in self.z.namelist():
48 | if '/' in fn:
49 | intdpath = '/'.join(fn.split('/')[0:-1]) + '/'
50 | if not self.internal_dir:
51 | self.internal_dir = intdpath
52 | if self.internal_dir and self.internal_dir != intdpath:
53 | raise Exception('Multiple directories inside a zipfile.')
54 | if self.internal_dir:
55 | evfn = fn.split('/')[-1].split('.')[0].upper()
56 | else:
57 | evfn = fn.split('.')[0].upper()
58 | if evfn == 'MODIF1':
59 | key = 'MODIF'
60 | else:
61 | key = evfn
62 | self.fns_set[key] = fn
63 | # Check if it is a deck
64 | deckfiles = ['caso', 'dger', 'sistema', 'confh']
65 | zipfiles = list(self.fns_set.keys())
66 | if not all([fd.upper() in str(zipfiles) for fd in deckfiles]):
67 | # if not set([fd.lower() for fd in deckfiles]).issubset(set([fz.lower() for fz in zipfiles])):
68 | info('The file doesn\'t look like a deck file.')
69 | # Check if it has a zip
70 | possible_decks = dict()
71 | for kfname, fname in self.fns_set.items():
72 | if '.zip' in fname.lower():
73 | for obj in self.z.infolist():
74 | possible_decks[obj.filename] = obj.date_time
75 | break
76 | if len(possible_decks) > 0:
77 | newer = None
78 | for k, v in possible_decks.items():
79 | if not newer:
80 | newer = (k, v)
81 | else:
82 | if v > newer[1]:
83 | newer = (k, v)
84 | self.z.extract(newer[0], '/tmp')
85 | self.setZipFile('/tmp/' + newer[0])
86 | self.openZip()
87 | else:
88 | raise Exception('The file opened is not a deck file and doesn\'t have a deck in its files.')
89 | else:
90 | info(self.fn + " is not a zip file")
91 |
92 | def openFile(self, fnp):
93 | try:
94 | fname = self.fns_set[fnp.upper()]
95 | f = self.z.open(fname)
96 | return f
97 | except Exception:
98 |             info('Fail to open %s', fnp)
99 | return False
100 |
101 | def openFileName(self, fname):
102 | try:
103 | f = self.z.open(fname)
104 | return f
105 | except Exception:
106 |             info('Fail to open %s', fname)
107 | return False
108 |
109 | def openFileExtData(self, fnp):
110 | try:
111 | fname = self.fns_set[fnp.upper()]
112 | self.z.extract(fname, self.dirname)
113 | if self.internal_dir:
114 | destfile = self.dirname + '/' + self.internal_dir + '/' + self.fhash + '_' + fname.split('/')[-1]
115 | else:
116 | destfile = self.dirname + "/" + self.fhash + '_' + fname
117 | os.rename(self.dirname + "/" + fname, destfile)
118 | f = open(destfile, 'r')
119 | data = f.readlines()
120 | f.close()
121 | os.remove(destfile)
122 | return data
123 | except Exception as e:
124 |             info('Fail to extract {} error: {}'.format(fnp, e))
125 | return False
126 |
127 | def extractFile(self, fnp):
128 | try:
129 | fname = self.fns_set[fnp.upper()]
130 | self.z.extract(fname, self.dirname)
131 | destfile = self.dirname + "/" + self.fhash + '_' + fname
132 | os.rename(self.dirname + "/" + fname, destfile)
133 | return destfile
134 | except Exception as e:
135 |             info('Fail to extract {} error: {}'.format(fnp, e))
136 | return False
137 |
--------------------------------------------------------------------------------
/imgs/logo_ped_aneel.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/venidera/deckparser/35b78e4dfe5fd626482f2826abf2741ecbc7081e/imgs/logo_ped_aneel.jpg
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 | __version__ = '1.0.1'
3 |
4 | long_desc = """Deckparser
5 | 
6 | Deckparser is a Python package that implements data importers for the input files of the official planning models of the Brazilian power system.
7 | The NEWAVE, DECOMP and DESSEM model formats are covered. These models are developed by CEPEL (Eletrobras).
8 | The data file is called a deck, a name probably inherited from the decks of punched cards used as computer input in the 1970s.
9 | A deck is a single, usually zipped, file that holds data files in binary and plain-text formats.
10 | The formats are defined and structured around Fortran input-data conventions, so the text-based data files can be acquired with straightforward parsing.
11 | The binary files are read with the Numpy package, which supports opening and reading Fortran binary data files.
12 | """
13 |
14 | if __name__ == '__main__':
15 | project_name = "deckparser"
16 | setup(
17 | name=project_name,
18 | version=__version__,
19 | author="Andre E Toscano",
20 | author_email="andre@venidera.com",
21 | description=("NEWAVE, DECOMP, DESSEM and SUISHI deck parser."),
22 | license="Apache 2.0",
23 | keywords="parser deck newave decomp dessem suishi",
24 | url="https://github.org/venidera/deckparser",
25 | packages=find_packages(),
26 | install_requires=['numpy', 'unidecode', 'chardet'],
27 | long_description=long_desc,
28 | classifiers=[
29 | "Development Status :: 5 - Production/Stable",
30 | "Intended Audience :: Developers",
31 | "Intended Audience :: System Administrators",
32 | "Operating System :: Unix",
33 | "Topic :: Utilities",
34 | ],
35 | test_suite=project_name + '.tests',
36 | scripts=['bin/decomp2json', 'bin/dessem2json'],
37 | package_data={'deckparser.importers.dessem.cfg': ['*.xml'],
38 | 'deckparser.importers.dessem.v2.cfg': ['*.xml'],
39 | 'deckparser.importers.dessem.out.cfg': ['*.json'],
40 | 'deckparser.importers.dessem.out.cfg.modif': ['*.json']},
41 | include_package_data=True
42 | )
43 |
--------------------------------------------------------------------------------