cluster.neuralnet package

Submodules

cluster.neuralnet.neuralnet_node module

class cluster.neuralnet.neuralnet_node.NeuralNetNode[source]

Bases: cluster.common.common_node.WorkFlowCommonNode

change_predict_fileList(filelist, dataconf)[source]
check_batch_exist(node_id)[source]

Check whether batch version data exists for the given node_id.

eval(node_id, parm={})[source]
Parameters:
  • node_id
  • parm
Returns:

get_active_batch(node_id)[source]

Find the batch version used for prediction.

get_batch_img_data(data_set, type)[source]
get_before_make_batch(node_id, nn_batch_ver_id)[source]

Find a batch version for eval and train.

get_eval_batch(node_id)[source]

Find a batch version for eval and train.

get_input_data(feed_node, cls_pool, input_feed_name)[source]
make_batch(node_id)[source]

Call this function to create the next batch version.
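A minimal, self-contained toy of the batch-version flow the helpers above (check_batch_exist, make_batch, get_active_batch) describe: check whether a batch exists, create the next version, then pick the active one. The in-memory dict and the node id below are illustrative stand-ins, not the node's real batch-info store:

    # Toy stand-in for the batch-info store; the real node keeps this elsewhere.
    batches = {}  # node_id -> list of {"ver": int, "active": "Y"/"N"}

    def check_batch_exist(node_id):
        return bool(batches.get(node_id))

    def make_batch(node_id):
        versions = batches.setdefault(node_id, [])
        for v in versions:
            v["active"] = "N"               # only the newest version stays active
        versions.append({"ver": len(versions) + 1, "active": "Y"})
        return versions[-1]["ver"]

    def get_active_batch(node_id):
        return next(v["ver"] for v in batches[node_id] if v["active"] == "Y")

    node_id = "nn00001_1_netconf_node"      # hypothetical workflow node id
    if not check_batch_exist(node_id):
        make_batch(node_id)
    print(get_active_batch(node_id))        # -> 1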

model_file_delete(model_path, modelname)[source]
predict(node_id, parm={})[source]
Parameters:
  • node_id
  • parm
Returns:

run(conf_data)[source]

Called to run training.

save_accloss_info(result)[source]
set_predict_return_cnn_img(labels, logits, pred_cnt)[source]
spaceprint(val, cnt)[source]

cluster.neuralnet.neuralnet_node_attnseq2seq module

class cluster.neuralnet.neuralnet_node_attnseq2seq.NeuralNetNodeAttnSeq2Seq[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf, data=None, result=None)[source]

Evaluate results with test data.

predict(node_id, parm={'num': 0, 'input_data': {}, 'clean_ans': True})[source]
Parameters:
  • node_id
  • parm
Returns:

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_autoencoder module

class cluster.neuralnet.neuralnet_node_autoencoder.NeuralNetNodeAutoEncoder[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

This is a network class for an Autoencoder. The Autoencoder provides two types of service:
  1. return the size of the input matrix
  2. return the compressed matrix

anomaly_detection(node_id, parm={'input_data': {}, 'type': 'encoder'}, raw_flag=False)[source]

Judge whether the requested data is an outlier or not.
Parameters:
  • node_id: string
  • parm: dict (includes the input data)
Returns:
  boolean
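A minimal sketch of the outlier test described here, assuming the judgment is based on reconstruction error against a threshold; the stand-in decode function and the 0.1 threshold are illustrative, not the node's real model:

    import numpy as np

    def reconstruction_error(x, decode):
        # mean squared error between the input and its reconstruction
        return float(np.mean((x - decode(x)) ** 2))

    def is_anomaly(x, decode, threshold=0.1):
        return reconstruction_error(x, decode) > threshold

    decode = lambda x: np.clip(x, 0.0, 1.0)    # stand-in for encode -> decode
    normal = np.array([0.2, 0.4, 0.6])
    strange = np.array([5.0, -3.0, 9.0])
    print(is_anomaly(normal, decode), is_anomaly(strange, decode))  # False True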

eval(node_id, conf_data, data=None, result=None, stand=0.1)[source]

Eval process: check whether the model works well (accuracy with a cross table).

predict(node_id, parm={'input_data': {}, 'type': 'encoder'}, internal=False, raw_flag=False)[source]
Parameters:
  • node_id
  • parm
Returns:

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_bilstmcrf module

class cluster.neuralnet.neuralnet_node_bilstmcrf.NeuralNetNodeBiLstmCrf[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode, cluster.common.neural_common_bilismcrf.BiLstmCommon

add_init_op()[source]
add_logits_op()[source]

Adds logits to self

add_loss_op()[source]

Adds loss to self

add_placeholders()[source]

Adds placeholders to self

add_pred_op()[source]

Adds labels_pred to self

add_summary(sess)[source]
add_train_op()[source]

Add train_op to self

add_word_embeddings_op()[source]

Adds word embeddings to self

build_graph()[source]
eval(node_id, conf_data, data=None, result=None, stand=0.1)[source]

Eval process: check whether the model works well (accuracy with a cross table).

get_feed_dict(words, labels=None, lr=None, dropout=None)[source]

Given some data, pad it and build a feed dictionary.
Args:
  • words: list of sentences; a sentence is a list of ids or a list of words, where a word is a list of ids
  • labels: list of ids
  • lr: (float) learning rate
  • dropout: (float) keep probability
Returns:
  dict {placeholder: value}
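A self-contained sketch of the padding step this method performs; the placeholder names in the final comment are assumptions, not the class's real attribute names:

    import numpy as np

    def pad_sequences(sequences, pad_tok=0):
        # pad every sequence to the length of the longest one
        max_len = max(len(seq) for seq in sequences)
        padded = [list(seq) + [pad_tok] * (max_len - len(seq)) for seq in sequences]
        lengths = [len(seq) for seq in sequences]
        return np.array(padded), np.array(lengths)

    words = [[4, 8, 15], [16, 23], [42]]          # toy sentences as word ids
    word_ids, sequence_lengths = pad_sequences(words)

    # the feed dict then maps placeholders to the padded arrays, e.g.
    # {self.word_ids: word_ids, self.sequence_lengths: sequence_lengths,
    #  self.lr: 0.001, self.dropout: 0.5}
    print(word_ids)
    print(sequence_lengths)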
predict(node_id, parm={'input_data': {}})[source]

Predict logic for NER: tokenize the input text and find the matching tag for each value.

predict_batch(sess, words)[source]
Args:
  • sess: a TensorFlow session
  • words: list of sentences
Returns:
  • labels_pred: list of labels for each sentence
  • sequence_length
run(conf_data)[source]
run_epoch(sess, train, dev, tags, epoch)[source]

Performs one complete pass over the train set and evaluates on dev.
Args:
  • sess: TensorFlow session
  • train: dataset that yields tuples of (sentences, tags)
  • dev: dataset
  • tags: {tag: index} dictionary
  • epoch: (int) number of the epoch
run_evaluate(sess, test, tags, result=None)[source]

Evaluates performance on the test set.
Args:
  • sess: TensorFlow session
  • test: dataset that yields tuples of (sentences, tags)
  • tags: {tag: index} dictionary
Returns:
  accuracy, f1 score
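A minimal, self-contained sketch of token-level accuracy and F1 over predicted vs. gold tag sequences; the toy tags are illustrative and the node's real scoring may differ (for example, chunk-level F1):

    def evaluate(gold_seqs, pred_seqs, o_tag="O"):
        correct = total = tp = pred_n = gold_n = 0
        for gold, pred in zip(gold_seqs, pred_seqs):
            for g, p in zip(gold, pred):
                total += 1
                correct += (g == p)
                pred_n += (p != o_tag)                  # predicted entity tokens
                gold_n += (g != o_tag)                  # gold entity tokens
                tp += (p != o_tag and g == p)           # correctly tagged entities
        acc = correct / total if total else 0.0
        precision = tp / pred_n if pred_n else 0.0
        recall = tp / gold_n if gold_n else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return acc, f1

    gold = [["B-PER", "O", "B-LOC"], ["O", "O"]]
    pred = [["B-PER", "O", "O"], ["O", "B-LOC"]]
    print(evaluate(gold, pred))   # -> (0.6, 0.5)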
train(train, dev, tags, sess)[source]

Performs training with early stopping and exponential learning-rate decay.
Args:
  • train: dataset that yields tuples of (sentences, tags)
  • dev: dataset
  • tags: {tag: index} dictionary
  • sess: TensorFlow session
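A minimal sketch of the training policy named here, early stopping on the dev score plus exponential learning-rate decay; run_one_epoch and all hyperparameter values below are placeholders, not the node's real API:

    def fit(run_one_epoch, n_epochs=15, lr=0.001, lr_decay=0.9, patience=3):
        best_score = float("-inf")
        epochs_without_improvement = 0
        for _ in range(n_epochs):
            dev_score = run_one_epoch(lr)     # train one epoch, return dev metric
            lr *= lr_decay                    # exponential decay after every epoch
            if dev_score > best_score:
                best_score = dev_score        # a real implementation saves the model here
                epochs_without_improvement = 0
            else:
                epochs_without_improvement += 1
                if epochs_without_improvement >= patience:
                    break                     # early stopping
        return best_score

    scores = iter([0.60, 0.65, 0.64, 0.63, 0.62])
    print(fit(lambda lr: next(scores)))       # -> 0.65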

cluster.neuralnet.neuralnet_node_cnn module

class cluster.neuralnet.neuralnet_node_cnn.NeuralNetNodeCnn[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf_data, data=None, result=None)[source]
eval_print(labels, t_cnt_arr, f_cnt_arr)[source]
eval_run(sess, input_data)[source]
get_model_cnn(type=None)[source]
get_saver_model(sess)[source]
predict(node_id, filelist)[source]
run(conf_data)[source]
set_saver_model(sess)[source]
train_run_cnn(sess, input_data, test_data)[source]

cluster.neuralnet.neuralnet_node_d2v module

class cluster.neuralnet.neuralnet_node_d2v.NeuralNetNodeDoc2Vec[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, parm={})[source]
Parameters:
  • node_id
  • parm
Returns:

predict(node_id, parm={'type': 'vector', 'val_2': [], 'val_1': []})[source]

Predict service.
  1. type 'vector': return the vector
  2. type 'sim': similarity from a positive list and a negative list
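Sketches of the parm payloads for the two documented service types; which of val_1/val_2 plays the positive vs. negative role is an assumption, and the call itself is only shown schematically:

    vector_request = {
        "type": "vector",             # return the vector(s) for val_1
        "val_1": ["some", "words"],
        "val_2": [],
    }
    sim_request = {
        "type": "sim",                # similarity from positive/negative lists
        "val_1": ["king", "woman"],   # assumed positive list
        "val_2": ["man"],             # assumed negative list
    }
    # node.predict("nn00001_1_d2v_node", parm=vector_request)   # hypothetical call
    print(vector_request["type"], sim_request["type"])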

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_fasttext module

class cluster.neuralnet.neuralnet_node_fasttext.NeuralNetNodeFastText[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_kerasdnn module

class cluster.neuralnet.neuralnet_node_kerasdnn.History[source]

Bases: keras.callbacks.Callback

on_batch_end(batch, logs={})[source]
on_train_begin(logs={})[source]
class cluster.neuralnet.neuralnet_node_kerasdnn.NeuralNetNodeKerasdnn[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf_data, data=None, result=None)[source]
Parameters:
  • node_id
  • conf_data
  • data
  • result
Returns:

generator_len(it)[source]

Helper for getting the length of a generator (could be promoted to a util class).
Parameters:
  • it: a Python generator
Returns:
  length of the generator
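A minimal re-implementation sketch of what this helper is described as doing (note that counting consumes the generator); it is not the node's actual code:

    def generator_len(it):
        # consume the generator and count the items it yields
        return sum(1 for _ in it)

    rows = (i for i in range(5))
    print(generator_len(rows))   # -> 5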

load_hdf5(data_path, dataframe)[source]

Load HDF5 data.
Returns:
  data_path

predict(nn_id, conf_data, parm={})[source]
read_hdf5(filename)[source]
read_hdf5_chunk(filename)[source]
run(conf_data)[source]
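A small sketch of reading and writing a DataFrame with HDF5, assuming load_hdf5 and read_hdf5 wrap pandas' HDF support (an assumption; pandas plus PyTables must be installed, and the file and key names are illustrative):

    import pandas as pd

    df = pd.DataFrame({"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]})
    df.to_hdf("sample.h5", key="data", mode="w")   # write the frame to HDF5
    print(pd.read_hdf("sample.h5", key="data"))    # read it back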

cluster.neuralnet.neuralnet_node_residual module

class cluster.neuralnet.neuralnet_node_residual.History[source]

Bases: keras.callbacks.Callback

on_batch_end(batch, logs={})[source]
on_train_begin(logs={})[source]
class cluster.neuralnet.neuralnet_node_residual.NeuralNetNodeReNet[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf_data, data=None, result=None)[source]
eval_print(labels, t_cnt_arr, f_cnt_arr)[source]
eval_run(input_data)[source]
get_model_resnet()[source]
predict(node_id, filelist)[source]
run(conf_data)[source]
set_saver_model()[source]
train_run_resnet(input_data, test_data)[source]

cluster.neuralnet.neuralnet_node_rnn module

class cluster.neuralnet.neuralnet_node_rnn.NeuralNetNodeRnn[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, parm={})[source]
Parameters:
  • node_id
  • parm
Returns:

predict(node_id, parm={})[source]
run(conf_data)[source]

cluster.neuralnet.neuralnet_node_seq2seq module

class cluster.neuralnet.neuralnet_node_seq2seq.NeuralNetNodeSeq2Seq[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf, data=None, result=None)[source]
Parameters:
  • node_id
  • conf
  • data
  • result
Returns:

predict(node_id, parm={'num': 0, 'input_data': {}, 'clean_ans': True})[source]
Parameters:
  • node_id
  • parm
Returns:

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_w2v module

class cluster.neuralnet.neuralnet_node_w2v.NeuralNetNodeWord2Vec[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf, data=None, result=None)[source]
Parameters:
  • node_id
  • conf
  • data
  • result
Returns:

predict(node_id, parm={'type': 'vector', 'val_2': [], 'val_1': []})[source]

Predict service.
  1. type 'vector': return the vector
  2. type 'sim': similarity from a positive list and a negative list

run(conf_data)[source]

cluster.neuralnet.neuralnet_node_wcnn module

class cluster.neuralnet.neuralnet_node_wcnn.NeuralNetNodeWideCnn[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf_data, data=None, result=None)[source]

Eval process: check whether the model works well (accuracy with a cross table).

get_model(netconf, type)[source]

Create the graph for the given netconf and type.

predict(node_id, parm={'num': 0, 'input_data': {}})[source]

Predict results with the pretrained model.

run(conf_data)[source]

Run the network training task.

cluster.neuralnet.neuralnet_node_wdnn module

class cluster.neuralnet.neuralnet_node_wdnn.NeuralNetNodeWdnn[source]

Bases: cluster.neuralnet.neuralnet_node.NeuralNetNode

eval(node_id, conf_data, data=None, result=None)[source]
Parameters:
  • node_id
  • conf_data
  • data
  • result
Returns:

generator_len(it)[source]

Helper for getting the length of a generator (could be promoted to a util class).
Parameters:
  • it: a Python generator
Returns:
  length of the generator

load_hdf5(data_path, dataframe)[source]

Load HDF5 data.
Returns:
  data_path

predict(node_id, ver, parm, data=None, result=None)[source]
Wdnn predict.
Fetches the model whose active flag is 'Y' from the batch list info and runs the prediction with it.
Args:
  • node_id
  • ver
  • parm
  • data
  • result
Returns:
  none

Example
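A self-contained toy of the selection rule described above: pick the model whose active flag is 'Y' from the batch list info before predicting. The dict layout is illustrative, not the real batch-info schema:

    batch_list = [
        {"nn_batch_ver_id": "nn00001_1_1", "active": "N", "model_path": "/tmp/m1"},
        {"nn_batch_ver_id": "nn00001_1_2", "active": "Y", "model_path": "/tmp/m2"},
    ]
    active = next(b for b in batch_list if b["active"] == "Y")
    print(active["nn_batch_ver_id"], active["model_path"])   # model used for predict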

predict2(nn_id, conf_data, parm={})[source]
read_hdf5(filename)[source]
read_hdf5_chunk(filename)[source]
run(conf_data)[source]

cluster.neuralnet.resnet module

class cluster.neuralnet.resnet.ResnetBuilder[source]

Bases: object

static build(input_shape, num_outputs, block_fn, repetitions)[source]

Builds a custom ResNet-like architecture (see the usage sketch after the build_resnet_* helpers below).

Args:
  • input_shape: the input shape in the form (nb_channels, nb_rows, nb_cols)
  • num_outputs: the number of outputs at the final softmax layer
  • block_fn: the block function to use, either basic_block or bottleneck; the original paper used basic_block for layers < 50
  • repetitions: number of repetitions of the various block units; at each block unit, the number of filters is doubled and the input size is halved
Returns:
  The Keras Model.
static build_resnet_101(input_shape, num_outputs)[source]
static build_resnet_152(input_shape, num_outputs)[source]
static build_resnet_18(input_shape, num_outputs)[source]
static build_resnet_34(input_shape, num_outputs)[source]
static build_resnet_50(input_shape, num_outputs)[source]
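A usage sketch based on the signatures above, assuming Keras is installed and the package is importable; the input shape and class count are placeholder values:

    from cluster.neuralnet.resnet import ResnetBuilder

    # (nb_channels, nb_rows, nb_cols) as documented, 10 output classes
    model = ResnetBuilder.build_resnet_18((3, 224, 224), 10)
    model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()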
cluster.neuralnet.resnet.basic_block(filters, init_strides=(1, 1), is_first_block_of_first_layer=False)[source]

Basic 3 x 3 convolution blocks for use in ResNets with <= 34 layers. Follows the improved scheme proposed in http://arxiv.org/pdf/1603.05027v2.pdf

cluster.neuralnet.resnet.bottleneck(filters, init_strides=(1, 1), is_first_block_of_first_layer=False)[source]

Bottleneck architecture for ResNets with more than 34 layers. Follows the improved scheme proposed in http://arxiv.org/pdf/1603.05027v2.pdf

Returns:
  A final conv layer with filters * 4 output filters

Module contents