# Created by Octave 3.6.1, Sun Apr 01 17:24:32 2012 UTC <root@t61>
# name: <cell-element>
-- Function File: A = dhardlim (N)
# name: <cell-element>
# name: <cell-element>
# name: <cell-element>
-- Function File: [TRAINVECTORS,VALIDATIONVECTORS,TESTVECTORS,INDEXOFTRAIN,INDEXOFVALIDATION,INDEXOFTEST] = dividerand (ALLCASES,TRAINRATIO,VALRATIO,TESTRATIO)
Divides the vectors into training, validation and test groups
according to the given ratios.

[trainVectors,validationVectors,testVectors,indexOfTrain,indexOfValidation,indexOfTest] = dividerand(allCases,trainRatio,valRatio,testRatio)

The ratios are normalized, so that:

dividerand(xx,1,2,3) == dividerand(xx,10,20,30)
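Only the relative sizes of the three ratios matter, as the identity above shows. A minimal Python sketch of this normalization step (an illustration, not the package's implementation):

```python
def normalize_ratios(train_ratio, val_ratio, test_ratio):
    # scale the ratios so they sum to 1; only their relative
    # sizes matter, so (1, 2, 3) and (10, 20, 30) are equivalent
    total = train_ratio + val_ratio + test_ratio
    return (train_ratio / total, val_ratio / total, test_ratio / total)
```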
# name: <cell-element>
Divides the vectors into training, validation and test groups according to
the given ratios.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = poslin (N)
`poslin' is a positive linear transfer function used by neural
networks.
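The transfer function itself is simple; a Python sketch of the standard definition, poslin(n) = max(0, n) (illustrative only, not the package's code):

```python
def poslin(n):
    # positive linear: identity for positive input, zero otherwise
    return max(0.0, n)
```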
# name: <cell-element>
`poslin' is a positive linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = dsatlin (N)
# name: <cell-element>
# name: <cell-element>
# name: <cell-element>
-- Function File: A = satlins (N)
`satlins' is a symmetric saturating linear transfer function used by
neural networks.
# name: <cell-element>
`satlins' is a symmetric saturating linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = hardlim (N)
# name: <cell-element>
# name: <cell-element>
# name: <cell-element>
-- Function File: A = hardlims (N)
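The description cells for these two entries did not survive extraction. For reference, both transfer functions threshold their input at zero; a Python sketch of the standard definitions (hardlim maps to {0, 1}, hardlims to {-1, 1}), illustrative only:

```python
def hardlim(n):
    # hard limit: 1 for n >= 0, otherwise 0
    return 1 if n >= 0 else 0

def hardlims(n):
    # symmetric hard limit: 1 for n >= 0, otherwise -1
    return 1 if n >= 0 else -1
```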
# name: <cell-element>
# name: <cell-element>
# name: <cell-element>
-- Function File: VEC = ind2vec (IND)
`ind2vec' converts indices to vectors.

vec = [1 2 3; 4 5 6; 7 8 9];
The output at the prompt will be:
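The example above is kept as extracted; its intermediate lines are missing. As an illustration of the usual index-to-vector conversion (each index becomes a one-hot column), here is a Python sketch with hypothetical example data, not the package's code:

```python
def ind2vec(ind):
    # build a matrix whose j-th column holds a single 1
    # in row ind[j] (indices are 1-based, as in Octave)
    rows = max(ind)
    return [[1 if ind[j] == i + 1 else 0 for j in range(len(ind))]
            for i in range(rows)]
```

For example, ind2vec([1, 3, 2]) places a 1 in row 1, 3 and 2 of the three columns.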
# name: <cell-element>
`ind2vec' converts indices to vectors.
# name: <cell-element>
# name: <cell-element>
-- Function File: F = isposint (N)
`isposint' returns true for positive integer values.

isposint(1)    # this returns TRUE
isposint(0.5)  # this returns FALSE
isposint(0)    # this also returns FALSE
isposint(-1)   # this also returns FALSE
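The four cases above can be reproduced with a small Python sketch of the same test (illustrative, not the package's implementation):

```python
def isposint(n):
    # true only for values that are integral and strictly positive
    return n > 0 and float(n) == int(n)
```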
# name: <cell-element>
`isposint' returns true for positive integer values.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = logsig (N)
`logsig' is a non-linear transfer function used to train neural
networks. This function can be used in newff(...) to create a new
feed-forward multi-layer neural network.
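`logsig' is the logistic sigmoid, logsig(n) = 1 / (1 + exp(-n)), which squashes any real input into the interval (0, 1). A Python sketch of that formula (not the package's code):

```python
import math

def logsig(n):
    # logistic sigmoid: maps any real n into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-n))
```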
# name: <cell-element>
`logsig' is a non-linear transfer function used to train neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: [YY,PS] = mapstd (XX,YMEAN,YSTD)
Map values to mean 0 and standard deviation 1.

[YY,PS] = mapstd(XX,ymean,ystd)

Applies the conversion and returns YY mapped to mean ymean and
standard deviation ystd.

[YY,PS] = mapstd(XX,FP)

Applies the conversion, but uses a struct to supply the target
mean/stddev.  This is the same as [YY,PS] = mapstd(XX,FP.ymean,FP.ystd).

YY = mapstd('apply',XX,PS)

Reapplies the conversion based on the data of a previous operation.
PS stores the mean and stddev of the first XX used.

XX = mapstd('reverse',YY,PS)

Reverses a previously applied conversion.

dx_dy = mapstd('dx',XX,YY,PS)

Returns the derivative of Y with respect to X.

dx_dy = mapstd('dx',XX,[],PS)

Returns the derivative (less efficient).

name = mapstd('name');

Returns the name of this conversion process.

FP = mapstd('pdefaults');

Returns the default process parameters.

names = mapstd('pnames');

Returns the descriptions of the process parameters.

Raises an error if FP contains inconsistent parameters.
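The forward and reverse mappings can be sketched in Python as follows; the sample standard deviation (n - 1 normalization) is an assumption here, and the code is an illustration rather than the package's implementation:

```python
def mapstd(xx, ymean=0.0, ystd=1.0):
    # map the values so the result has mean ymean and stddev ystd;
    # ps records what is needed to reapply or reverse the mapping
    n = len(xx)
    xmean = sum(xx) / n
    # assumed: sample stddev with n - 1 normalization
    xstd = (sum((v - xmean) ** 2 for v in xx) / (n - 1)) ** 0.5
    yy = [(v - xmean) / xstd * ystd + ymean for v in xx]
    ps = {"xmean": xmean, "xstd": xstd, "ymean": ymean, "ystd": ystd}
    return yy, ps

def mapstd_reverse(yy, ps):
    # undo a previous mapping using the stored parameters
    return [(v - ps["ymean"]) / ps["ystd"] * ps["xstd"] + ps["xmean"]
            for v in yy]
```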
# name: <cell-element>
Map values to mean 0 and standard deviation 1.
# name: <cell-element>
# name: <cell-element>
-- Function File: PR = min_max (PP)
`min_max' returns the variable PR with the range of each row of the matrix PP.

PR - R x 2 matrix of min and max values for R input elements

Pp = [1 2 3; -1 -0.5 -3]
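For the example matrix above, each row contributes one [min, max] pair. A Python sketch of the operation (illustrative only):

```python
def min_max(pp):
    # one [min, max] pair per row of the input matrix
    return [[min(row), max(row)] for row in pp]
```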
# name: <cell-element>
`min_max' returns the variable PR with the range of each row of the matrix PP.
# name: <cell-element>
# name: <cell-element>
-- Function File: NET = newff (PR,SS,TRF,BTF,BLF,PF)
`newff' creates a feed-forward backpropagation network.

Pr - R x 2 matrix of min and max values for R input elements
Ss - 1 x Ni row vector with the size of the ith layer, for N layers
trf - 1 x Ni list with the transfer function of the ith layer
btf - batch network training function
blf - batch weight/bias learning function
pf - performance function

Pr = [0.1 0.8; 0.1 0.75; 0.01 0.8];
This is a 3 x 2 matrix, so the network has 3 input neurons.

net = newff(Pr, [4 1], {"tansig","purelin"}, "trainlm", "learngdm", "mse");
# name: <cell-element>
`newff' creates a feed-forward backpropagation network.
# name: <cell-element>
# name: <cell-element>
-- Function File: NET = newp (PR,SS,TRANSFUNC,LEARNFUNC)
`newp' creates a perceptron.

PLEASE DON'T USE THIS FUNCTION, IT'S NOT FINISHED YET!
======================================================

Pr - R x 2 matrix of min and max values for R input elements
ss - a scalar value with the number of neurons
transFunc - a string with the transfer function
learnFunc - a string with the learning function
# name: <cell-element>
`newp' creates a perceptron.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = poslin (N)
`poslin' is a positive linear transfer function used by neural
networks.
# name: <cell-element>
`poslin' is a positive linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: [PP,TT] = poststd (PN,MEANP,STDP,TN,MEANT,STDT)
`poststd' postprocesses the data which has been preprocessed by
`prestd'.
# name: <cell-element>
`poststd' postprocesses the data which has been preprocessed by `prestd'.
# name: <cell-element>
# name: <cell-element>
-- Function File: [PN,MEANP,STDP,TN,MEANT,STDT] = prestd (P,T)
`prestd' preprocesses the data so that the mean is 0 and the
standard deviation is 1.
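A Python sketch of the standardization applied to each row (the sample standard deviation with n - 1 normalization is assumed; this is not the package's implementation). The returned means and stddevs are what trastd reuses for new data:

```python
def prestd(p):
    # standardize each row to mean 0 and standard deviation 1,
    # returning the per-row means and stddevs alongside the data
    meanp = [sum(row) / len(row) for row in p]
    stdp = [(sum((v - m) ** 2 for v in row) / (len(row) - 1)) ** 0.5
            for row, m in zip(p, meanp)]
    pn = [[(v - m) / s for v in row] for row, m, s in zip(p, meanp, stdp)]
    return pn, meanp, stdp
```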
# name: <cell-element>
`prestd' preprocesses the data so that the mean is 0 and the standard
deviation is 1.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = purelin (N)
`purelin' is a linear transfer function used by neural networks.
# name: <cell-element>
`purelin' is a linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: radbas (N)
Radial basis transfer function.

`radbas(n) = exp(-n^2)'
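A Python sketch of that formula (illustrative only): the function peaks at 1 for n = 0 and decays symmetrically toward 0.

```python
import math

def radbas(n):
    # radial basis: exp(-n^2), maximal at n = 0
    return math.exp(-n ** 2)
```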
# name: <cell-element>
Radial basis transfer function.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = satlin (N)
`satlin' is a saturating linear transfer function used by neural
networks.
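`satlin' clamps its input to [0, 1]; its symmetric variant `satlins' (documented below) clamps to [-1, 1]. A Python sketch of the standard definitions (an illustration, not the package's code):

```python
def satlin(n):
    # saturating linear: clamp the input to the range [0, 1]
    return min(1.0, max(0.0, n))

def satlins(n):
    # symmetric saturating linear: clamp to the range [-1, 1]
    return min(1.0, max(-1.0, n))
```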
# name: <cell-element>
`satlin' is a saturating linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = satlins (N)
`satlins' is a symmetric saturating linear transfer function used by
neural networks.
# name: <cell-element>
`satlins' is a symmetric saturating linear transfer function used by neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: saveMLPStruct (NET,STRFILENAME)
`saveMLPStruct' saves a neural network structure to *.txt files.
# name: <cell-element>
`saveMLPStruct' saves a neural network structure to *.txt files.
# name: <cell-element>
# name: <cell-element>
-- Function File: NETOUTPUT = sim (NET, MINPUT)
`sim' is used to simulate a previously defined neural network.
`net' is created with newff(...) and MINPUT should be the
corresponding input data set.
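For a two-layer network like the one in the newff example (tansig hidden layer, purelin output), the simulation is a single forward pass. A hypothetical Python sketch, where the dict fields W1/b1/W2/b2 are assumed names for the layer weights and biases, not the package's actual net structure:

```python
import math

def sim(net, m_input):
    # forward pass: tansig hidden layer, then purelin (linear) output;
    # net is a hypothetical dict {"W1", "b1", "W2", "b2"} of plain lists
    hidden = [math.tanh(sum(w * x for w, x in zip(row, m_input)) + b)
              for row, b in zip(net["W1"], net["b1"])]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(net["W2"], net["b2"])]
```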
# name: <cell-element>
`sim' is used to simulate a previously defined neural network.
# name: <cell-element>
# name: <cell-element>
-- Function File: [MTRAIN, MTEST, MVALI] = subset (MDATA,NTARGETS,IOPTI,FTEST,FVALI)
`subset' splits the main data matrix, which contains inputs and
targets, into 2 or 3 subsets depending on the parameters.

The first parameter MDATA must be in row order.  This means that if
the network has three inputs, the matrix must have 3 rows with x
columns defining the data for the inputs, plus further rows for the
outputs (targets); e.g. a neural network with three inputs and two
outputs must have 5 rows with x columns.

The second parameter NTARGETS defines the number of rows which
contain the target values.

The third argument `iOpti' is optional and can take three values:
  0: no optimization
  1: randomise the column order and order the columns containing
     min and max values to be in the train set
  2: do NOT randomise the column order, but order the columns
     containing min and max values to be in the train set
The default value is `1'.

The fourth argument `fTest' is also optional and defines how many
data sets will be in the test set.  The default value is `1/3'.

The fifth parameter `fVali' is also optional and defines how many
data sets will be in the validation set.  The default value is
`1/6'.  So with the default values, 50% of all data sets are used
for training.
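With the defaults fTest = 1/3 and fVali = 1/6, one half of the columns remains for training. A hypothetical Python sketch of this arithmetic (the rounding behavior here is an assumption, not the package's exact rule):

```python
def subset_sizes(n_columns, f_test=1.0 / 3.0, f_vali=1.0 / 6.0):
    # defaults: 1/3 test, 1/6 validation, the remaining 1/2 training
    n_test = round(n_columns * f_test)
    n_vali = round(n_columns * f_vali)
    return n_columns - n_test - n_vali, n_test, n_vali
```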
[mTrain, mTest, mVali] = subset(mData,1)
returns three subsets of the complete matrix,
with randomized and optimized columns.
[mTrain, mTest] = subset(mData,1,)
# name: <cell-element>
`subset' splits the main data matrix which contains inputs and targets
into 2 or 3 subsets.
# name: <cell-element>
# name: <cell-element>
-- Function File: A = tansig (N)
`tansig' is a non-linear transfer function used to train neural
networks. This function can be used in newff(...) to create a new
feed-forward multi-layer neural network.
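`tansig' is the hyperbolic tangent sigmoid, tansig(n) = 2 / (1 + exp(-2n)) - 1, which is mathematically equal to tanh(n). A Python sketch of that formula (not the package's code):

```python
import math

def tansig(n):
    # tansig(n) = 2 / (1 + exp(-2n)) - 1, numerically equal to tanh(n)
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0
```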
# name: <cell-element>
`tansig' is a non-linear transfer function used to train neural networks.
# name: <cell-element>
# name: <cell-element>
-- Function File: [NET] = train (MLPNET,MINPUTN,MOUTPUT,[],[],VV)
A neural feed-forward network will be trained with `train'.

[net,tr,out,E] = train(MLPnet,mInputN,mOutput,[],[],VV);

left side arguments:
net: the trained network of the net structure `MLPnet'

right side arguments:
MLPnet : the untrained network, created with `newff'
mInputN: normalized input matrix
mOutput: output matrix (normalized or not)
[]     : unused parameter
[]     : unused parameter
VV     : validation structure
# name: <cell-element>
A neural feed-forward network will be trained with `train'.
# name: <cell-element>
# name: <cell-element>
-- Function File: PN = trastd (P,MEANP,STDP)
`trastd' preprocesses additional data for neural network simulation.

`p'    : test input data
`meanp': vector with standardization parameters from prestd(...)
`stdp' : vector with standardization parameters from prestd(...)

stdp = [1.2910; 1.2910];
pn = trastd(p,meanp,stdp);
# name: <cell-element>
`trastd' preprocesses additional data for neural network simulation.
# name: <cell-element>
# name: <cell-element>
-- Function File: IND = vec2ind (VECTOR)
`vec2ind' converts vectors to indices.

vec = [1 2 3; 4 5 6; 7 8 9];
The output at the prompt will be:
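The example above is kept as extracted; its output lines are missing. As an illustration of the usual vector-to-index conversion (the inverse of ind2vec), here is a Python sketch with hypothetical one-hot data, not the package's code:

```python
def vec2ind(vec):
    # for each column, return the (1-based) row index of its largest
    # entry; for one-hot columns this is the row holding the single 1
    return [max(range(len(vec)), key=lambda r: vec[r][c]) + 1
            for c in range(len(vec[0]))]
```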
# name: <cell-element>
`vec2ind' converts vectors to indices.