# Created by Octave 3.6.1, Mon May 21 19:48:13 2012 UTC <root@brouzouf>
# name: <cell-element>
-- Function File: cartprod (VARARGIN)
Computes the Cartesian product of the given column vectors (row
vectors).  The vector elements are assumed to be numbers.

Alternatively, the vectors can be specified as a matrix, by its

To calculate the Cartesian product of vectors, P = A x B x C x D
..., A, B, C and D are required to be column vectors.  The algorithm
iteratively calculates the products ( ( (A x B) x C ) x D ) x
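As a hedged illustration (not the package's own code), the iterative product idea can be sketched in Python/NumPy; the name `cartprod` is a stand-in and the row ordering follows `itertools.product`:

```python
import numpy as np
from itertools import product

# Sketch of the idea: one row per combination of elements, one column
# per input vector.  Name and ordering are illustrative assumptions.
def cartprod(*vectors):
    return np.array(list(product(*vectors)), dtype=float)

P = cartprod([1, 2], [3, 4], [5])
print(P.shape)   # (4, 3): 2*2*1 combinations, 3 input vectors
```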
# name: <cell-element>
Computes the Cartesian product of the given column vectors (row vectors).
# name: <cell-element>
# name: <cell-element>
-- Function File: LAMBDA = circulant_eig (V)
-- Function File: [VS, LAMBDA] = circulant_eig (V)
Fast, compact calculation of the eigenvalues and eigenvectors of a
circulant matrix.
Given an N*1 vector V, return the eigenvalues LAMBDA and
optionally the eigenvectors VS of the N*N circulant matrix C that has
V as its first column.

Theoretically the same as `eig(make_circulant_matrix(v))', but with many
fewer computations; does not form C explicitly.

Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A
Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf,

See also: circulant_make_matrix, circulant_matrix_vector_product,
circulant_inv
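The underlying identity (the eigenvalues of a circulant matrix are the DFT of its first column) can be checked with a small NumPy sketch; this illustrates the math, not the Octave implementation:

```python
import numpy as np

v = np.array([4.0, 1.0, 2.0, 3.0])
n = len(v)
# Eigenvalues of the circulant matrix with first column v are fft(v).
lam_fft = np.fft.fft(v)
# Build C explicitly only to verify: C[i, j] = v[(i - j) mod n].
C = np.array([[v[(i - j) % n] for j in range(n)] for i in range(n)])
lam_eig = np.linalg.eigvals(C)
# Same multiset of eigenvalues, up to ordering and roundoff.
assert all(np.min(np.abs(lam_eig - l)) < 1e-9 for l in lam_fft)
```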
# name: <cell-element>
Fast, compact calculation of the eigenvalues and eigenvectors of a
circulant matrix.
# name: <cell-element>
# name: <cell-element>
-- Function File: C = circulant_inv (V)
Fast, compact calculation of the inverse of a circulant matrix.
Given an N*1 vector V, return the inverse C of the N*N circulant
matrix that has V as its first column.  The returned C is the
first column of the inverse, which is also circulant; to get the
full matrix, use `circulant_make_matrix(c)'.

Theoretically the same as `inv(make_circulant_matrix(v))(:, 1)', but
requires many fewer computations and does not form matrices
explicitly.

Roundoff may induce a small imaginary component in C even if V is
real; use `real(c)' to remedy this.

Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A
Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf,

See also: circulant_make_matrix, circulant_matrix_vector_product,
circulant_eig
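The DFT route the text describes can be illustrated in NumPy (an illustration of the math, not the Octave source): the eigenvalues of the inverse are the reciprocals of `fft(v)`.

```python
import numpy as np

v = np.array([4.0, 1.0, 2.0, 3.0])
n = len(v)
# First column of the inverse, entirely in the frequency domain.
c = np.real(np.fft.ifft(1.0 / np.fft.fft(v)))
# Verify against the explicit circulant matrices.
C    = np.array([[v[(i - j) % n] for j in range(n)] for i in range(n)])
Cinv = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ Cinv, np.eye(n))
```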
# name: <cell-element>
Fast, compact calculation of the inverse of a circulant matrix,
given an N*1 vector V.
# name: <cell-element>
circulant_make_matrix
# name: <cell-element>
-- Function File: C = circulant_make_matrix (V)
Produce a full circulant matrix given the first column.
Given an N*1 vector V, returns the N*N circulant matrix C where V
is the left column and all other columns are downshifted versions
of it.

Note: If the first row R of a circulant matrix is given, the first
column V can be obtained as `v = r([1 end:-1:2])'.

Reference: Gene H. Golub and Charles F. Van Loan, Matrix
Computations, 3rd Ed., Section 4.7.7

See also: circulant_matrix_vector_product, circulant_eig,
circulant_inv
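The construction, and the row-to-column note above, can be sketched in NumPy (illustrative only):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0])
n = len(v)
# Column j of a circulant matrix is the first column circularly
# downshifted j times: C[i, j] = v[(i - j) mod n].
C = np.array([[v[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C[:, 0], v)
# Recover the first column from the first row, as in the note:
# v = r([1 end:-1:2]) is r[0] followed by the rest of r reversed.
r = C[0, :]
v_back = np.concatenate(([r[0]], r[:0:-1]))
assert np.allclose(v_back, v)
```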
# name: <cell-element>
Produce a full circulant matrix given the first column.
Given an N*1 vector V, returns the N*N circulant matrix C.
# name: <cell-element>
circulant_matrix_vector_product
# name: <cell-element>
-- Function File: Y = circulant_matrix_vector_product (V, X)
Fast, compact calculation of the product of a circulant matrix
with a vector.
Given N*1 vectors V and X, return the matrix-vector product Y =
CX, where C is the N*N circulant matrix that has V as its first
column.

Theoretically the same as `make_circulant_matrix(x) * v', but does
not form C explicitly; uses the discrete Fourier transform.

Because of roundoff, the returned Y may have a small imaginary
component even if V and X are real (use `real(y)' to remedy this).

Reference: Gene H. Golub and Charles F. Van Loan, Matrix
Computations, 3rd Ed., Section 4.7.7

See also: circulant_make_matrix, circulant_eig, circulant_inv
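The frequency-domain trick can be illustrated in NumPy (a sketch of the math, not the Octave code): a circulant matrix-vector product is a circular convolution, so it reduces to three FFTs.

```python
import numpy as np

v = np.array([4.0, 1.0, 2.0, 3.0])
x = np.array([1.0, -1.0, 2.0, 0.5])
n = len(v)
# C @ x is the circular convolution of v and x.
y = np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(x)))
# Verify against the explicit circulant matrix.
C = np.array([[v[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(y, C @ x)
```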
# name: <cell-element>
Fast, compact calculation of the product of a circulant matrix with a
vector.
# name: <cell-element>
# name: <cell-element>
-- Function File: [Q, R, Z] = cod (A)
-- Function File: [Q, R, Z, P] = cod (A)
-- Function File: [...] = cod (A, '0')
Computes the complete orthogonal decomposition (COD) of the matrix
A:

Let A be an M-by-N matrix, and let `K = min(M, N)'.  Then Q is
M-by-M orthogonal, Z is N-by-N orthogonal, and R is M-by-N such
that `R(:,1:K)' is upper trapezoidal and `R(:,K+1:N)' is zero.
The additional P output argument specifies that pivoting should be
used in the first step (QR decomposition).  In this case,

If a second argument of '0' is given, an economy-sized
factorization is returned so that R is K-by-K.

_NOTE_: This is currently implemented by double QR factorization
plus some tricky manipulations, and is not as efficient as using
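A rough NumPy sketch of the double-QR idea, under the assumption of a full-rank wide matrix; note that this plain version yields a lower-trapezoidal R, whereas `cod' performs the extra manipulations needed to return the upper-trapezoidal form:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                                   # K = min(M, N) = 3
A = rng.standard_normal((m, n))
Q, R1 = np.linalg.qr(A, mode='complete')      # first QR:  A = Q @ R1
Z, R2 = np.linalg.qr(R1.T, mode='complete')   # second QR: R1.T = Z @ R2
R = R2.T                                      # hence A = Q @ R @ Z.T
assert np.allclose(A, Q @ R @ Z.T)
assert np.allclose(R[:, m:], 0.0)             # trailing N-K columns vanish
```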
# name: <cell-element>
Computes the complete orthogonal decomposition (COD) of the matrix A:
# name: <cell-element>
# name: <cell-element>
-- Function File: C = condeig (A)
-- Function File: [V, LAMBDA, C] = condeig (A)
Compute the condition numbers of the eigenvalues of a matrix.  The
condition numbers are the reciprocals of the cosines of the angles
between the left and right eigenvectors.

* A must be a square numeric matrix.

* C is a vector of condition numbers of the eigenvalues of A.

* V is the matrix of right eigenvectors of A.  The result is the
same as for `[v, lambda] = eig (a)'.

* LAMBDA is the diagonal matrix of eigenvalues of A.  The result
is the same as for `[v, lambda] = eig (a)'.
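A small NumPy sketch of the definition (not the condeig source): with unit right eigenvectors x_i and unit left eigenvectors y_i, each condition number is c_i = 1 / |y_i' x_i|; a non-normal matrix makes the c_i large.

```python
import numpy as np

A = np.array([[1.0, 100.0],
              [0.0,   2.0]])          # non-normal: ill-conditioned eigenvalues
lam,  X = np.linalg.eig(A)            # right eigenvectors (unit columns)
lamL, Y = np.linalg.eig(A.T)          # left eigenvectors = right ones of A' (real A)
order = [int(np.argmin(np.abs(lamL - l))) for l in lam]
Y = Y[:, order]                       # align the two eigenvalue orderings
c = 1.0 / np.abs(np.sum(Y.conj() * X, axis=0))
# For this A both eigenvalues have condition number sqrt(10001) ~ 100.005.
assert np.allclose(c, np.sqrt(10001.0))
```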
# name: <cell-element>
Compute the condition numbers of the eigenvalues of a matrix.
# name: <cell-element>
# name: <cell-element>
-- Function File: B = funm (A, F)
Compute the matrix equivalent of the function F; F can be a function name
or a function handle.

For trigonometric and hyperbolic functions, `thfm' is invoked
automatically, as it is based on `expm' and avoids diagonalization.
For other functions diagonalization is invoked, which implies that
(depending on the properties of the input matrix A) the results can be
very inaccurate _without any warning_.  For easily diagonalizable and
stable matrices the results of funm will be sufficiently accurate.

Note that you should not use funm for 'sqrt', 'log' or 'exp';
instead use sqrtm, logm and expm, as these are more robust.

(Compute the matrix equivalent of sin())

function bk1 = besselk1 (x)

B = funm (A, @besselk1);
(Compute the matrix equivalent of the Bessel function K1(); a helper
function is needed here to convey extra arguments to besselk())

See also: thfm, expm, logm, sqrtm
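The diagonalization route described above can be sketched in NumPy (an illustration of the method, not the funm source): f(A) = V f(Lambda) inv(V), accurate only when the eigenvector basis is well conditioned.

```python
import numpy as np

def funm_diag(A, f):
    # f(A) = V * f(Lambda) * inv(V); accuracy depends on cond(V).
    lam, V = np.linalg.eig(A)
    return V @ np.diag(f(lam)) @ np.linalg.inv(V)

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
S = np.real(funm_diag(A, np.sin))
# A @ A == -I, so the sine series collapses to sinh(1) * A.
assert np.allclose(S, np.sinh(1.0) * A)
```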
# name: <cell-element>
Compute the matrix equivalent of the function F; F can be a function name or a
function handle.
# name: <cell-element>
# name: <cell-element>
-- Function File: [BLOCKVECTORX, LAMBDA] = lobpcg (BLOCKVECTORX,
OPERATORA)
-- Function File: [BLOCKVECTORX, LAMBDA, FAILUREFLAG] = lobpcg
(BLOCKVECTORX, OPERATORA)
-- Function File: [BLOCKVECTORX, LAMBDA, FAILUREFLAG, LAMBDAHISTORY,
RESIDUALNORMSHISTORY] = lobpcg (BLOCKVECTORX, OPERATORA, OPERATORB,
OPERATORT, BLOCKVECTORY, RESIDUALTOLERANCE, MAXITERATIONS,
VERBOSITYLEVEL)
Solves Hermitian partial eigenproblems using preconditioning.

The first form outputs the array of smallest algebraic eigenvalues
LAMBDA and the corresponding matrix of orthonormalized eigenvectors
BLOCKVECTORX of the Hermitian (full or sparse) operator OPERATORA,
using the input matrix BLOCKVECTORX as an initial guess, without
preconditioning, somewhat similarly to:

# for a real symmetric operator operatorA
opts.issym = 1; opts.isreal = 1; K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR', opts);

# for a Hermitian operator operatorA
K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR');

The second form returns a convergence flag.  If FAILUREFLAG is 0
then all the eigenvalues converged; otherwise not all converged.

The third form computes the smallest eigenvalues LAMBDA and
corresponding eigenvectors BLOCKVECTORX of the generalized
eigenproblem Ax=lambda Bx, where the Hermitian operators OPERATORA and
OPERATORB are given as functions, as well as a preconditioner,
OPERATORT.  The operators OPERATORB and OPERATORT must in
addition be _positive definite_.  To compute the largest eigenpairs of
OPERATORA, simply apply the code to OPERATORA multiplied by -1.
The code does not involve _any_ matrix factorizations of OPERATORA
and OPERATORB; thus, e.g., it preserves the sparsity and the
structure of OPERATORA and OPERATORB.

RESIDUALTOLERANCE and MAXITERATIONS control the tolerance and maximum
number of steps, and VERBOSITYLEVEL = 0, 1, or 2 controls the
amount of printed info.  LAMBDAHISTORY is a matrix with all the
iterative lambdas, and RESIDUALNORMSHISTORY are matrices of the
history of 2-norms of residuals.
* BLOCKVECTORX (class numeric) - initial approximation to the
eigenvectors; a full or sparse n-by-blockSize matrix.
BLOCKVECTORX must be full rank.

* OPERATORA (class numeric, char, or function_handle) - the
main operator of the eigenproblem; can be a matrix, a
function name, or a handle

Optional function input:
* OPERATORB (class numeric, char, or function_handle) - the
second operator, if solving a generalized eigenproblem; can
be a matrix, a function name, or a handle; by default if empty,

* OPERATORT (class char or function_handle) - the
preconditioner; by default `operatorT(blockVectorX) =

Optional constraints input:
* BLOCKVECTORY (class numeric) - a full or sparse n-by-sizeY
matrix of constraints, where sizeY < n.  BLOCKVECTORY must be
full rank.  The iterations will be performed in the
(operatorB-) orthogonal complement of the column space of
BLOCKVECTORY.

Optional scalar input parameters:
* RESIDUALTOLERANCE (class numeric) - tolerance; by default
`residualTolerance = n * sqrt (eps)'

* MAXITERATIONS - maximum number of iterations; by default
`maxIterations = min (n, 20)'

* VERBOSITYLEVEL - either 0 (no info), 1, or 2 (with pictures);
by default `verbosityLevel = 0'.

* BLOCKVECTORX and LAMBDA (class numeric) are both the computed
blockSize eigenpairs, where `blockSize = size (blockVectorX,
2)' for the initial guess BLOCKVECTORX if it is full rank.

* FAILUREFLAG (class integer) as described above.

* LAMBDAHISTORY (class numeric) as described above.

* RESIDUALNORMSHISTORY (class numeric) as described above.

The functions `operatorA(blockVectorX)', `operatorB(blockVectorX)' and
`operatorT(blockVectorX)' must support BLOCKVECTORX being a
matrix, not just a column vector.

Every iteration involves one application of OPERATORA and
OPERATORB, and one of OPERATORT.

Main memory requirements: 6 (9 if `isempty(operatorB)=0') matrices
of the same size as BLOCKVECTORX, 2 matrices of the same size as
BLOCKVECTORY (if present), and two square matrices of the size
In all examples below, we use the Laplacian operator on a 20x20
square with mesh size 1, which can be generated in MATLAB by:

A = delsq (numgrid ('S', 21));

or in MATLAB and Octave by:
[~,~,A] = laplacian ([19, 19]);

Note that `laplacian' is a function of the specfun octave-forge

The following example:
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, 1e-5, 50, 2);

attempts to compute the first 8 eigenpairs without preconditioning,
but not all eigenpairs converge after 50 steps, so failureFlag=1.

[blockVectorX, lambda] = lobpcg (randn (n, 2), A, blockVectorY, 1e-5, 200, 2);
blockVectorY = [blockVectorY, blockVectorX];
lambda_all = [lambda_all' lambda']';

attempts to compute the same 8 eigenpairs by calling the code 4
times with blockSize=2, using orthogonalization to the previously
found eigenvectors.

The following example:
R = ichol (A, struct('michol', 'on'));
precfun = @(x) R \ (R' \ x);
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, [], @(x)precfun(x), 1e-5, 60, 2);

computes the same eigenpairs in less than 25 steps, so that
failureFlag=0, using the preconditioner function `precfun' defined
inline.  If `precfun' is defined as an Octave function in a file,
the function handle `@(x)precfun(x)' can be equivalently replaced
by the function name `precfun'.  Running:

[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, speye (n), @(x)precfun(x), 1e-5, 50, 2);

produces similar answers, but is somewhat slower and needs more
memory as technically a generalized eigenproblem with B=I is
The following example for a mostly diagonally dominant sparse
matrix A demonstrates different types of preconditioning, compared
to the standard use of the main diagonal of A:

clear all; close all;
M = spdiags ([1:n]', 0, n, n);
A = M + sprandsym (n, .1);
[~,~,~,~,rnp] = lobpcg (Xini, A, tol, maxiter, 1);
[~,~,~,~,r] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,1), semilogy (r'); hold on;
semilogy (rnp', ':>');
title ('No preconditioning (top)'); axis tight;

precfun = @(x) M \ x;  % M is no longer symmetric
[~,~,~,~,rns] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,2), semilogy (r'); hold on;
semilogy (rns', '--s');
title ('Nonsymmetric preconditioning (square)'); axis tight;

precfun = @(x) M \ (x + 10*sin(x));  % nonlinear preconditioning
[~,~,~,~,rnl] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,3), semilogy (r'); hold on;
semilogy (rnl', '-.*');
title ('Nonlinear preconditioning (star)'); axis tight;

M = abs (M - 3.5 * speye (n, n));
[~,~,~,~,rs] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,4), semilogy (r'); hold on;
semilogy (rs', '-d');
title ('Selective preconditioning (diamond)'); axis tight;
This main function `lobpcg' is a version of the preconditioned
conjugate gradient method (Algorithm 5.1) described in A. V. Knyazev,
Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block
Preconditioned Conjugate Gradient Method, SIAM Journal on Scientific
Computing 23 (2001), no. 2, pp. 517-541.
`http://dx.doi.org/10.1137/S1064827500366124'

* an excessively small requested tolerance may result in frequent
restarts and instability.  The code is not written to produce
eps-level accuracy!  Use common sense.

* the code may be very sensitive to the number of eigenpairs
computed, if there is a cluster of eigenvalues not completely

operatorA = diag ([1 1.99 2:99]);
[blockVectorX, lambda] = lobpcg (randn (100, 1), operatorA, 1e-10, 80, 2);
[blockVectorX, lambda] = lobpcg (randn (100, 2), operatorA, 1e-10, 80, 2);
[blockVectorX, lambda] = lobpcg (randn (100, 3), operatorA, 1e-10, 80, 2);

The main distribution site: `http://math.ucdenver.edu/~aknyazev/'

A C version of this code is part of the
`http://code.google.com/p/blopex/' package and is directly available,
e.g., in PETSc and HYPRE.

# name: <cell-element>
Solves Hermitian partial eigenproblems using preconditioning.
# name: <cell-element>
# name: <cell-element>
-- Function File: Y = ndcovlt (X, T1, T2, ...)
Computes an n-dimensional covariant linear transform of an n-d
tensor, given a transformation matrix for each dimension.  The
number of columns of each transformation matrix must match the
corresponding extent of X, and the number of rows determines the
corresponding extent of Y.  For example:

size (X, 2) == columns (T2)
size (Y, 2) == rows (T2)

The element `Y(i1, i2, ...)' is defined as the sum of

X(j1, j2, ...) * T1(i1, j1) * T2(i2, j2) * ...

over all j1, j2, ....  For two dimensions, this reduces to
`Y = T1 * X * T2''.

[] passed as a transformation matrix is converted to the identity
matrix for the corresponding dimension.
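The defining sum can be written compactly with `einsum` in NumPy; this sketch illustrates the contraction pattern, not the Octave implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 3, 4))
T1 = rng.standard_normal((5, 2))   # cols(Ti) == size(X, i); rows set size(Y, i)
T2 = rng.standard_normal((6, 3))
T3 = rng.standard_normal((7, 4))
# Y(i,j,k) = sum over a,b,c of X(a,b,c) * T1(i,a) * T2(j,b) * T3(k,c)
Y = np.einsum('abc,ia,jb,kc->ijk', X, T1, T2, T3)
assert Y.shape == (5, 6, 7)
# Two-dimensional special case: Y = T1 * X * T2'
X2 = rng.standard_normal((2, 3))
assert np.allclose(np.einsum('ab,ia,jb->ij', X2, T1, T2), T1 @ X2 @ T2.T)
```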
# name: <cell-element>
Computes an n-dimensional covariant linear transform of an n-d tensor,
given a transformation matrix for each dimension.
# name: <cell-element>
# name: <cell-element>
-- Function File: [W, H, ITER, HIS] = nmf_bpas (A, K)
Nonnegative Matrix Factorization by Alternating Nonnegativity
Constrained Least Squares using Block Principal Pivoting/Active

This function solves one of the following problems: given A and K,
find W and H such that
(1) minimize 1/2 * || A-WH ||_F^2
(2) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta * || H ||_F^2 )
(3) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta * (sum_(i=1)^n || H(:,i) ||_1^2 ) )
where W>=0 and H>=0 elementwise.  The input arguments are A: input data
matrix (m x n), and K: target low rank.

`Type : Default is 'regularized', which is recommended for quick application testing unless 'sparse' or 'plain' is explicitly needed.  If sparsity is needed for the 'W' factor, then apply this function to the transpose of 'A' with formulation (3).  Then exchange 'W' and 'H' and take their transposes.  Imposing sparsity on both factors is not recommended and is thus not included in this software.'
'plain' to use formulation (1)
'regularized' to use formulation (2)
'sparse' to use formulation (3)

`NNLSSolver : Default is 'bp', which is in general faster.'
'bp' to use the algorithm in [1]; 'as' to use

`Alpha : Parameter alpha in formulation (2) or (3).  Default is the average of all elements in A.  There is no good justification for this default value, and you might want to try other values.'

`Beta : Parameter beta in formulation (2) or (3).'
Default is the average of all elements in A.  There is no good
justification for this default value, and you might want to
try other values.
`MaxIter : Maximum number of iterations.  Default is 100.'

`MinIter : Minimum number of iterations.  Default is 20.'

`MaxTime : Maximum amount of time in seconds.  Default is 100,000.'

`Winit : (m x k) initial value for W.'

`Hinit : (k x n) initial value for H.'

`Tol : Stopping tolerance.  Default is 1e-3.  If you want to obtain a more accurate solution, decrease TOL and increase MAX_ITER at the same time.'

0 (default) - No debugging information is collected.
1 (debugging purpose) - History of computation is returned in the 'HIS' variable.
2 (debugging purpose) - History of computation is additionally printed on screen.

`W : Obtained basis matrix (m x k)'
`H : Obtained coefficient matrix (k x n)'
`iter : Number of iterations'
`HIS : (debugging purpose) History of computation'

nmf(A,20,'verbose',2)
nmf(A,30,'verbose',2,'nnls_solver','as')
nmf(A,5,'verbose',2,'type','sparse')
nmf(A,60,'verbose',1,'type','plain','w_init',rand(m,k))
nmf(A,70,'verbose',2,'type','sparse','nnls_solver','bp','alpha',1.1,'beta',1.3)

References:
[1] For using this software, please cite:
Jingu Kim and Haesun Park, Toward Faster Nonnegative Matrix
Factorization: A New Algorithm and Comparisons,
In Proceedings of the 2008 Eighth IEEE International Conference on
Data Mining (ICDM'08), 353-362, 2008
[2] If you use 'nnls_solver'='as' (see above), please cite:
Hyunsoo Kim and Haesun Park, Nonnegative Matrix Factorization
on Alternating Nonnegativity Constrained Least Squares and Active
SIAM Journal on Matrix Analysis and Applications, 2008, 30, 713-730

Check the original code at `http://www.cc.gatech.edu/~jingu'
# name: <cell-element>
Nonnegative Matrix Factorization by Alternating Nonnegativity
Constrained Least Squares.
# name: <cell-element>
# name: <cell-element>
-- Function File: [W, H] = nmf_pg (V, WINIT, HINIT, TOL, TIMELIMIT,
MAXITER)
Non-negative matrix factorization by alternating non-negative
least squares using projected gradients.

The matrix V is factorized into two positive matrices W and H
such that `V = W*H + U', where U is a matrix of residuals that can
be negative or positive.  When the matrix V is positive, the order
of the elements in U is bounded by the optional named argument TOL
(default value `1e-9').

The factorization is not unique and depends on the initial guess
for the matrices W and H.  You can pass these initializations using
the optional named arguments WINIT and HINIT.

timelimit, maxiter: limits on time and iterations

[W H] = nmf_pg(A, tol=1e-3);
# name: <cell-element>
Non-negative matrix factorization by alternating non-negative least
squares using projected gradients.
# name: <cell-element>
# name: <cell-element>
-- Function File: [VSTACKED, ASTACKED] = rotparams (RSTACKED)
The function w = rotparams (r) is the inverse of rotv():
w = rotparams(r) is such that rotv(w)*r' == eye(3).

If used as [v, a] = rotparams(r), idem, with v (1 x 3) s.t. w ==

0 <= norm(w)==a <= pi

:-O !! Does not check whether 'r' is a rotation matrix.

Ignores matrices with zero rows or with NaNs (returns 0 for them).
# name: <cell-element>
The function w = rotparams (r) is the inverse of rotv().
# name: <cell-element>
# name: <cell-element>
-- Function File: R = rotv ( v, ang )
The function rotv calculates the matrix of rotation about V with angle
|v|: r = rotv(v [, ang])

Returns the rotation matrix with axis v and angle, in radians,
norm(v) or ang (if present).

rotv(v) == w'*w + cos(a) * (eye(3)-w'*w) - sin(a) * crossmat(w)

where a = norm (v) and w = v/a.

v and ang may be vertically stacked: if 'v' is 2x3, then
rotv(v) == [rotv(v(1,:)); rotv(v(2,:))]

See also: rotparams, rota, rot
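The displayed formula can be checked numerically with a small NumPy sketch (the helper names `crossmat` and `rotv` mirror the text; the sign convention follows the formula as given):

```python
import numpy as np

def crossmat(w):
    # K such that K @ u == cross(w, u)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rotv(v):
    a = np.linalg.norm(v)
    w = np.asarray(v, dtype=float) / a
    W = np.outer(w, w)               # w'*w as an outer product (w is a row)
    return W + np.cos(a) * (np.eye(3) - W) - np.sin(a) * crossmat(w)

v = np.array([0.3, -0.5, 0.7])
R = rotv(v)
assert np.allclose(R @ R.T, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)      # proper rotation
w = v / np.linalg.norm(v)
assert np.allclose(R @ w, w)                  # the axis is invariant
```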
# name: <cell-element>
The function rotv calculates the matrix of rotation about V with angle
|v|: r = rotv(v [, ang])
# name: <cell-element>
# name: <cell-element>
-- Function File: X = smwsolve (A, U, V, B)
-- Function File: smwsolve (SOLVER, U, V, B)
Solves the square system `(A + U*V')*X == B', where U and V are
matrices with several columns, using the Sherman-Morrison-Woodbury
formula, so that a system with A as the left-hand side is actually
solved.  This is especially advantageous if A is diagonal, sparse,
triangular or positive definite.  A can be sparse or full; the
other matrices are expected to be full.  Instead of a matrix A, a
user may alternatively provide a function SOLVER that performs the
left division operation.
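The identity behind this can be illustrated in NumPy (a sketch of the formula, not the smwsolve source): `(A + U V')^{-1} b = A^{-1}b - A^{-1}U (I + V' A^{-1} U)^{-1} V' A^{-1} b`, so only systems with A on the left-hand side are solved.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
A = np.diag(rng.uniform(1.0, 2.0, n))   # cheap to solve: diagonal
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

Ainv_b = np.linalg.solve(A, b)
Ainv_U = np.linalg.solve(A, U)
small = np.eye(k) + V.T @ Ainv_U         # k-by-k "capacitance" matrix
x = Ainv_b - Ainv_U @ np.linalg.solve(small, V.T @ Ainv_b)
assert np.allclose((A + U @ V.T) @ x, b)
```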
# name: <cell-element>
Solves the square system `(A + U*V')*X == B', where U and V are
matrices with several columns.
# name: <cell-element>
# name: <cell-element>
-- Function File: Y = thfm (X, MODE)
Trigonometric/hyperbolic functions of a square matrix X.

MODE must be the name of a function.  Valid functions are 'sin',
'cos', 'tan', 'sec', 'csc', 'cot' and all their inverses and/or
hyperbolic variants, and 'sqrt', 'log' and 'exp'.

The code `thfm (x, 'cos')' calculates the matrix cosine _even if_
the input matrix X is _not_ diagonalizable.

_Important note_: This algorithm does _not_ use an eigensystem
similarity transformation.  It maps the MODE functions to functions
of `expm', `logm' and `sqrtm', which are known to be robust with
respect to non-diagonalizable ('defective') X.
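The kind of mapping thfm uses can be illustrated for the cosine: cos(X) = (e^{iX} + e^{-iX})/2, with no diagonalization.  In this hedged sketch, `expm_series` is a toy Taylor-series stand-in for a robust matrix exponential like Octave's `expm`, adequate only for the small demo matrix:

```python
import numpy as np

def expm_series(M, terms=30):
    # plain Taylor series; fine for this small, modest-norm matrix
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def cosm(X):
    iX = 1j * np.asarray(X, dtype=complex)
    return np.real(0.5 * (expm_series(iX) + expm_series(-iX)))

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
# X @ X == -I, so the cosine series collapses to cosh(1) * I.
assert np.allclose(cosm(X), np.cosh(1.0) * np.eye(2))
```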
# name: <cell-element>
Trigonometric/hyperbolic functions of a square matrix X.