Program
Sunday, July 22
Registration opens at 9:00
Time  Tutorials  Amphi H  

9:00 – 9:30  Coffee break  room H101  
9:30 – 12:00  Viktor Levandovskyy  Elements of Computer-Algebraic Analysis
12:00 – 13:30  Lunch break  L'entracte 770 avenue Centrale 

13:30 – 16:00  Pascal Koiran  Upper bounds on real roots and lower bounds for the permanent  
16:00 – 16:20  Coffee break  room H101  
16:20 – 18:50  Seth Sullivant  Algebraic Statistics 
Monday, July 23
Registration opens at 8:15
Time  Amphi A010  Amphi A022 

8:45 – 9:00  Opening remarks  
1A  Solutions of differential equations  1B  Polynomial and linear system solving, modular arithmetic
9:00 – 9:25  Alin Bostan, Muhammad F. I. Chowdhury, Romain Lebreton, Bruno Salvy and Éric Schost. Power Series Solutions of Singular (q)-Differential Equations  Luk Bettale, Jean-Charles Faugère and Ludovic Perret. Solving Polynomial Systems over Finite Fields: Improved Analysis of the Hybrid Approach
9:25 – 9:50  Sergei Abramov, Denis Khmelnov. On valuations of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients  Martin Albrecht. The M4RIE library for dense linear algebra over small fields with even characteristic
9:50 – 10:15  Olivier Bournez, Daniel Graça and Amaury Pouly. On the complexity of solving polynomial initial value problems  Jérémy Berthomieu and Romain Lebreton. Relaxed p-adic Hensel lifting for algebraic systems
10:15 – 10:35  Coffee break  hall  
10:35 – 11:35 
Frits Beukers  Hypergeometric functions: computational aspects

2A  Closed form solutions of differential equations  2B  Algebraic geometry
11:40 – 12:05  Moulay Barkatou, Thomas Cluzeau, Carole El Bacha and Jacques-Arthur Weil. Computing Closed Form Solutions of Integrable Connections  Paolo Lella. An efficient implementation of the algorithm computing the Borel-fixed points of a Hilbert scheme
12:05 – 12:30  Moulay A. Barkatou and Clemens G. Raab. Solving Linear Ordinary Differential Systems in Hyperexponential Extensions  Peter Scheiblechner. Effective de Rham Cohomology – The Hypersurface Case
12:30 – 14:15  Lunch break  Diderot 770 avenue Centrale 

Software presentation 1: User interfaces  Poster 1
14:15 – 14:40  Xiaoyu Chen and Dongming Wang. GeoText: an intelligent dynamic geometric textbook  List of accepted posters 
14:40 – 15:05  Joris van der Hoeven, Andrey Grozin, Massimiliano Gubinelli, Grégoire Lecerf, François Poulain and Denis Raux. GNU TeXmacs: a scientific editing platform
Software presentation 2: Support libraries  Poster 2
15:10 – 15:35  Roman Pearce and Michael Monagan. POLY: a new polynomial data structure for MAPLE  List of accepted posters 
15:35 – 16:00  Alexander Konovalov. Parallel programming support in GAP
16:00 – 16:25  Eva Garcia Llorente and Isabel Bermejo. MONOMIALIDEAL.LIB  
16:25 – 16:45  Coffee break  hall  
3A  Topology  3B  Fast polynomial arithmetic  
16:45 – 17:10  Ana Romero and Francis Sergeraert. Programming before Theorizing, a case study  Matthew Comer, Erich Kaltofen and Clément Pernet. Sparse Polynomial Interpolation and Berlekamp/Massey Algorithms That Correct Outlier Errors in Input Values 
17:10 – 17:35  Stavros Garoufalidis and Christoph Koutschan. Twisting qholonomic sequences by complex roots of unity  Francesco Biscani. Parallel sparse polynomial multiplication on modern hardware architectures 
17:35 – 18:00  James McCarron. Small Homogeneous Quandles  Joris van der Hoeven and Grégoire Lecerf. On the complexity of multivariate blockwise polynomial multiplication
18:00 – 19:15  ISSAC 2012 business meeting 
Tuesday, July 24
Registration opens at 8:30
Time  Amphi A010  Amphi A022 

4A  Creative telescoping  4B  Real polynomial system solving  
9:00 – 9:25  Shaoshi Chen, Manuel Kauers and Michael Singer. Telescopers for Rational and Algebraic Functions via Residues  Pavel Emeliyanenko and Michael Sagraloff. On the Complexity of Solving a Bivariate Polynomial System 
9:25 – 9:50  Shaoshi Chen and Manuel Kauers. Order-Degree Curves for Hypergeometric Creative Telescoping  Jean-Charles Faugère, Mohab Safey El Din and Pierre-Jean Spaenlehauer. Critical Points and Gröbner Bases: the Unmixed Case
9:50 – 10:15  Masao Ishikawa and Christoph Koutschan. Zeilberger's Holonomic Ansatz for Pfaffians  Adam Strzeboński. Solving Polynomial Systems over Semialgebraic Sets Represented by Cylindrical Algebraic Formulas 
10:15 – 10:35  Coffee break  hall  
10:35 – 11:35 
Volker Strassen  Asymptotic spectrum and matrix multiplication

5A  Univariate polynomial root computation  5B  Matrix normal form  
11:40 – 12:05  Adam Strzeboński and Elias Tsigaridas. Univariate real root isolation in multiple extension fields  Wei Zhou, George Labahn and Arne Storjohann. Computing Minimal Nullspace Bases 
12:05 – 12:30  Michael Sagraloff. When Newton meets Descartes: A Simple and Fast Algorithm to Isolate the Real Roots of a Polynomial  Colton Pauderis and Arne Storjohann. Deterministic unimodularity certification 
12:30 – 14:15  Lunch break  Diderot 770 avenue Centrale 

6A  Solving polynomial systems with symmetries  6B  Exotic factorizations  
14:15 – 14:40  Jules Svartz and Jean-Charles Faugère. Solving Polynomial Systems Globally Invariant Under an Action of the Symmetric Group and Application to the Equilibria of N vortices in the Plane  Raoul Blankertz, Joachim von zur Gathen and Konstantin Ziegler. Compositions and collisions at degree p^2
14:40 – 15:05  Romain Lebreton and Éric Schost. Algorithms for the universal decomposition algebra  Mingbo Zhang and Yong Luo. Factorization of Differential Operators with Ordinary Differential Polynomial Coefficients 
Software presentation 3: Packages  Poster 3
15:10 – 15:35  Anja Korporal, George Regensburger and Markus Rosenkranz. Symbolic computation for ordinary boundary problems in MAPLE  List of accepted posters
15:35 – 16:00  Flavia Stan, Moulay Barkatou and Eckhard Pflügel. ISOLDE – a MAPLE package for linear functional equations
16:00 – 16:25  Guillaume Quintin. The DECODING library for list decoding  
16:25 – 16:45  Coffee break  hall  
7A  Univariate polynomial root computation  7B  Matrix normal form  
16:45 – 17:10  Danko Adrovic and Jan Verschelde. Computing Puiseux Series for Algebraic Surfaces  Jean-François Biasse and Claus Fieker. A polynomial time algorithm for computing the HNF of a module over the integers of a number field
17:10 – 17:35  Andre Galligo and Maria Emilia Alonso. A Root Isolation Algorithm for Sparse Univariate Polynomials  Mustafa Elsheikh, Mark Giesbrecht, Andy Novocin and B. David Saunders. Fast Computation for Smith Forms of Sparse Matrices Over Local Rings 
17:35 – 18:00  Vikram Sharma and Chee Yap. Near Optimal Tree Size Bounds on a Simple Real Root Isolation Algorithm  Evelyne Hubert and George Labahn. Rational invariants of scalings from Hermite normal forms 
18:00 – 18:25  Industrial partners presentations  
18:30 – 19:30  Gondolas to access the Bastille (last gondola at 00:15) Téléphérique Grenoble Bastille, Quai Stéphane Jay 38000 Grenoble 

19:30   Banquet  restaurant du Téléphérique 5, Fort de la Bastille 38000 Grenoble. 
Wednesday, July 25
Registration opens at 8:30
Abstract: In this paper, we present an algorithm to factor a differential operator $L=\sigma^n+c_{n-1}\sigma^{n-1}+\cdots+c_1\sigma+c_0$ with coefficients $c_i$ in $\K\{y\}$, where $\K$ is a constant field and $\K\{y\}$ is the ordinary differential polynomial ring over $\K$. Also, we discuss the applications of the algorithm in decomposing nonlinear differential polynomials and factoring differential operators with coefficients in the extension field of $\K$.
Abstract: We describe algorithms and implementations for linear algebra with dense matrices over GF(2^e) for 2 <= e <= 10. Our main contributions are: (a) the notion of Newton–John tables to avoid scalar multiplications in Gaussian elimination and matrix multiplication, (b) an efficient implementation of Karatsuba-style multiplication for matrices over extension fields of GF(2) and (c) a description of an open-source library, called M4RIE, providing the fastest known implementation of dense linear algebra over GF(2^e) for 2 <= e <= 10.
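The packed-representation idea underlying such libraries is easy to illustrate. The sketch below is not M4RIE code: it works over GF(2) rather than GF(2^e) and omits the Newton–John tables, but it shows why bit-packing pays off, since a single XOR updates an entire matrix row at once:

```python
# Illustrative sketch only (not M4RIE's algorithm): Gaussian elimination
# over GF(2), with each matrix row packed into a Python integer so that
# one XOR eliminates a whole row in a single operation.
def rank_gf2(rows):
    """Return the rank of a matrix over GF(2), rows given as bit-packed ints."""
    rows = list(rows)
    rank = 0
    for i in range(len(rows)):
        # Choose the remaining row with the highest leading bit as pivot.
        pivot = max(range(i, len(rows)), key=lambda j: rows[j])
        if rows[pivot] == 0:
            break
        rows[i], rows[pivot] = rows[pivot], rows[i]
        lead = 1 << (rows[i].bit_length() - 1)
        for j in range(len(rows)):
            if j != i and rows[j] & lead:
                rows[j] ^= rows[i]  # clear the pivot bit from row j
        rank += 1
    return rank
```

For example, the rows 110, 011, 101 over GF(2) satisfy 110 + 011 = 101, so the rank is 2.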
Abstract: We show that the problem of constructing telescopers for rational functions of m variables is equivalent to the problem of constructing telescopers for algebraic functions of m - 1 variables and present a new algorithm to construct telescopers for algebraic functions of two variables. These considerations are based on analyzing the residues of the input. According to experiments, the resulting algorithm for rational functions of three variables is faster than known algorithms, at least in some examples of combinatorial interest. The algorithm for algebraic functions implies a new bound on the order of the telescopers.
Abstract: We prove an effective bound for the degrees of generators of the algebraic de Rham cohomology of smooth affine hypersurfaces. In particular, we show that the de Rham cohomology of a smooth hypersurface of degree d in C^n can be generated by differential forms of degree O(n 2^n d^{n^2}). This result is relevant for the algorithmic computation of the cohomology, but is also motivated by questions in the theory of ordinary differential equations related to the infinitesimal Hilbert 16th problem.
Abstract: In this paper we present a deterministic algorithm for the computation of a minimal nullspace basis of an $m\times n$ input matrix of univariate polynomials over a field $\mathbb{K}$ with $m\le n$. This algorithm computes a minimal nullspace basis of a degree $d$ input matrix with a cost of $O^{\sim}\left(n^{\omega}\left\lceil md/n\right\rceil \right)$ field operations in $\mathbb{K}$. The same algorithm also works in the more general situation of computing a shifted minimal nullspace basis, with a given degree shift $\vec{s}\in\mathbb{Z}^{n}$ whose entries bound the corresponding column degrees of the input matrix. If $\rho$ is the sum of the $m$ largest entries of $\vec{s}$, then an $\vec{s}$-minimal right nullspace basis can be computed with a cost of $O^{\sim}(n^{\omega}\rho/m)$ field operations.
Abstract: Signature-based algorithms, including F5, F5C, G2V and GVW, are efficient algorithms for computing Gröbner bases in polynomial rings. A signature-based algorithm is presented in the current paper to compute Gröbner bases in solvable polynomial rings, which include usual commutative polynomial rings and some non-commutative polynomial rings such as the Weyl algebra. The generalized rewritten criterion (proposed in Sun and Wang 2011) is used to construct this new algorithm. When this new algorithm uses the order implied by GVW, its termination is proved without special assumptions on the computing order of critical pairs. Data structures similar to F5 can be used to speed up this new algorithm, and Gröbner bases for the corresponding syzygy modules can be obtained from the outputs in polynomial time. Experimental data show that most redundant computations can be avoided in this new algorithm.
Abstract: This paper relates how a "simple" result in combinatorial homotopy finally led to a totally new understanding of basic theorems in Algebraic Topology, namely the Eilenberg–Zilber theorem, the twisted Eilenberg–Zilber theorem, and finally the Eilenberg–MacLane correspondence between the Classifying Space and Bar constructions. In the last case, it was an amazingly lucky consequence of computations based on conjectures not yet proved. The key new tool used in this context is Robin Forman's Discrete Vector Fields theory.
Abstract: Scalings form a class of group actions on affine spaces that have both theoretical and practical importance. A scaling is accurately described by an integer matrix. Tools from integer linear algebra are exploited to compute a minimal generating set of rational invariants, trivial rewriting and rational sections for such a group action. The primary tools used are Hermite normal forms and their unimodular multipliers. With the same line of ideas, a complete solution to the scaling symmetry reduction of a polynomial system is also presented.
Abstract: The problem of isolating all real roots of a squarefree integer polynomial $f(X)$ inside any given interval $I_0$ is a fundamental problem. EVAL is a simple and practical exact numerical algorithm for this problem: it recursively bisects $I_0$, and any subinterval $I\subseteq I_0$, until a certain numerical predicate $C_0(I)\lor C_1(I)$ holds on each $I$. We prove that the size of the recursive bisection tree is $$O(d(L+r+\log d))$$ where $f$ has degree $d$, its coefficients have absolute values $<2^L$, and $I_0$ contains $r$ roots of $f$. In the range $L\ge d$, our bound is the sharpest known, and provably optimal. Our results are closely paralleled by recent bounds on EVAL by Sagraloff–Yap (ISSAC 2011) and Burr–Krahmer (2012). In the range $L\le d$, our bound is incomparable with those of Sagraloff–Yap or Burr–Krahmer. Similar to the Burr–Krahmer proof, we exploit the technique of ``continuous amortization'' from Burr–Krahmer–Yap (2009), namely to bound the tree size by an integral $\int_{I_0} G(x)dx$ over a suitable ``charging function'' $G(x)$. The introduction of the output-size parameter $r$ seems new. We give an application of this feature to the problem of ray-shooting (i.e., finding the smallest root in a given interval).
Abstract: Cylindrical algebraic formulas are an explicit representation of semialgebraic sets as finite unions of cylindrically arranged disjoint cells bounded by graphs of algebraic functions. We present a version of the Cylindrical Algebraic Decomposition (CAD) algorithm customized for solving systems of polynomial equations and inequalities over semialgebraic sets given in this representation. The algorithm can also be used to solve conjunctions of polynomial conditions in an incremental manner. We show application examples and give an empirical comparison of incremental and direct CAD computation.
Abstract: We consider a univariate polynomial $f$ with real coefficients having a high degree $N$ but a rather small number $d+1$ of monomials, with $d<<N$. Such a sparse polynomial has at most $d$ real roots. Our target is to find, for each real root of $f$, an interval isolating this root from the others. The usual subdivision methods, relying either on Sturm sequences or on a Moebius transform followed by Descartes' rule of signs, destroy the sparse structure. Our approach relies on the generalized Budan–Fourier theorem of Coste, Lajous, Lombardi, Roy [CLLR:2005] and the techniques developed in Galligo [Gal:2011]. To such an $f$ is associated a set of $n$ differentiation operators called $f$-derivations. The Budan–Fourier function $V_f(x)$ counts the sign changes in the sequence of $f$-derivatives of $f$ evaluated at $x$. The values at which this function jumps are called the $f$-virtual roots of $f$; these include the real roots of $f$. We also consider the augmented $f$-virtual roots of $f$ and introduce a genericity property which eases our study and its presentation. We present a fast root isolation method and an algorithm which has been implemented in Maple. We rely on an improved generalized Budan–Fourier count applied to both the input polynomial and its reciprocal, together with Newton–Halley approximation steps.
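The Budan–Fourier count is concrete enough to sketch. The toy version below is an illustration only: it uses the ordinary derivatives of f rather than the paper's f-derivations, and counts the sign variations V_f(x) in the derivative sequence at a point x:

```python
def budan_fourier_count(coeffs, x):
    """Sign variations in the sequence f(x), f'(x), ..., f^(n)(x), where
    coeffs = [a_0, ..., a_n] gives f = a_0 + a_1 X + ... + a_n X^n.
    By the classical Budan-Fourier theorem, V_f(a) - V_f(b) bounds the
    number of real roots of f in (a, b]."""
    def horner(c, t):
        v = 0
        for a in reversed(c):
            v = v * t + a
        return v

    values = []
    c = list(coeffs)
    while c:
        values.append(horner(c, x))
        c = [k * a for k, a in enumerate(c)][1:]  # formal derivative
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)
```

For f = X^2 - 1, the count is 2 at x = -2 and 0 at x = 2, consistent with the two real roots in (-2, 2].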
Abstract: We introduce a novel algorithm denoted NEWDSC to isolate the real roots of a univariate squarefree polynomial f with integer coefficients. The algorithm iteratively subdivides an initial interval which is known to contain all real roots of f and performs exact operations on the coefficients of f in each step. For the subdivision strategy, we combine Descartes' Rule of Signs and Newton iteration. More precisely, instead of using a fixed subdivision strategy such as bisection in each iteration, a Newton step based on the number of sign variations for an actual interval is considered, and, only if the Newton step fails, we fall back to bisection. Following this approach, our analysis shows that, for most iterations, quadratic convergence towards the real roots is achieved. In terms of complexity, our method induces a recursion tree of almost optimal size O(n\log(n\tau)), where n denotes the degree of the polynomial and \tau the bitsize of its coefficients. The latter bound constitutes an improvement by a factor of \tau upon all existing subdivision methods for the task of isolating the real roots. In addition, we provide a bit complexity analysis showing that NEWDSC needs only \tilde{O}(n^3\tau) bit operations to isolate all real roots of f. This matches the best bound known for this fundamental problem. However, in comparison to the significantly more involved numerical algorithms by V. Pan and A. Schönhage which achieve the same bit complexity for the task of isolating all complex roots, NEWDSC focuses on real root isolation, is much easier to access and to implement.
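As an illustration of the baseline that NEWDSC improves upon, here is a minimal Descartes-based subdivision in exact rational arithmetic. It is plain bisection only: the Newton steps that give NEWDSC its quadratic convergence are omitted, and the code is a sketch, not the paper's algorithm.

```python
from fractions import Fraction

def sign_variations(seq):
    s = [v for v in seq if v != 0]
    return sum(1 for a, b in zip(s, s[1:]) if a * b < 0)

def taylor_shift(c, a):
    """Coefficients of p(x + a), coefficient list c with low degree first."""
    c = list(c)
    for i in range(len(c) - 1):
        for j in range(len(c) - 2, i - 1, -1):
            c[j] += a * c[j + 1]
    return c

def descartes_bound(c, a, b):
    """Descartes bound on the number of roots of p in the open interval (a, b):
    move the roots into (0, 1), then count sign variations of (1+x)^n p(1/(1+x)),
    i.e. reverse the coefficient list and Taylor-shift by 1."""
    w = b - a
    g = taylor_shift(c, a)                      # roots now in (0, w)
    g = [ci * w**k for k, ci in enumerate(g)]   # roots now in (0, 1)
    return sign_variations(taylor_shift(list(reversed(g)), 1))

def isolate(c, a, b):
    """Isolating intervals for the real roots of a squarefree polynomial
    (coefficients c, low degree first) in (a, b); pass Fraction endpoints."""
    v = descartes_bound(c, a, b)
    if v == 0:
        return []
    if v == 1:
        return [(a, b)]
    m = (a + b) / 2
    out = isolate(c, a, m) + isolate(c, m, b)
    if sum(ci * m**k for k, ci in enumerate(c)) == 0:
        out.append((m, m))
    return out
```

For x^2 - 2 on (-2, 2), one bisection at 0 separates the two roots, giving the isolating intervals (-2, 0) and (0, 2).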
Abstract: We study the complexity of computing the real solutions of a bivariate polynomial system using the recently presented algorithm BISOLVE~\cite{besbisolve2011}. BISOLVE is a classical elimination method which first projects the solutions of a system onto the x- and y-axes and then selects the actual solutions from the induced candidate set. However, unlike similar algorithms, BISOLVE requires no genericity assumption on the input, nor does it need any change of the coordinate system. Furthermore, extensive benchmarks from~\cite{besbisolve2011} confirm that the algorithm outperforms state-of-the-art approaches by a large factor. In this paper, we show that, for two polynomials f,g\in\mathbb{Z}[x,y] of total degree at most $n$ with integer coefficients bounded in absolute value by 2^\tau, BISOLVE computes isolating boxes for all real solutions of the system f=g=0 using \Otilde(n^8+n^7\tau) bit operations, thereby improving the previous record bound by four orders of magnitude.
Abstract: The sporadic simple group Monster acts on the Conway–Griess–Norton (CGN) algebra, which is a real algebra $V_M$ of dimension 196,884, equipped with a positive definite scalar product and a bilinear, commutative, and non-associative algebra product. Certain properties of idempotents in $V_M$, which correspond to 2A involutions in the Monster, have been axiomatized by Ivanov as the Majorana representation of the Monster. The axiomatization enables us to talk about Majorana representations of arbitrary groups $G$ that are generated by involutions. In general, a Majorana representation may or may not exist, but if $G$ is isomorphic to a subgroup of the Monster and a representation is isomorphic to the corresponding subalgebra of $V_M$, then we say that the Majorana representation is based on an embedding of $G$ in the Monster. In this paper, we describe a generic theoretical procedure to construct Majorana representations, and a GAP computer program that implements the procedure. It turns out that in many cases the representations are based on embeddings in the Monster, thereby providing a valuable tool for studying subalgebras of the CGN algebra that were inaccessible in the 196,884-dimensional setting.
Abstract: Creative telescoping applied to a bivariate proper hypergeometric term produces linear recurrence operators with polynomial coefficients, called telescopers. We provide bounds for the degrees of the polynomials appearing in these operators. Our bounds are expressed as curves in the (r,d)-plane which assign to every order r a bound on the degree d of the telescopers. These curves are hyperbolas, which reflect the phenomenon that higher-order telescopers tend to have lower degree, and vice versa.
Abstract: We present a variation of the modular algorithm for computing the pseudo-HNF of an O_K-module presented by Cohen, where O_K is the ring of integers of a number field K. The modular strategy was conjectured to run in polynomial time by Cohen, but so far no such proof was available in the literature. In this paper, we provide a new method to prevent the coefficient explosion and we rigorously assess the complexity with respect to the size of the input and the invariants of the field K.
Abstract: We study tight bounds and fast algorithms for LCLMs of several linear differential operators with polynomial coefficients. We analyse the worstcase arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs. We propose a new algorithm that reduces the LCLM computation to a linear algebra problem on a polynomial matrix. The new algorithm yields sharp bounds on the coefficient degrees of the LCLM, improving by two orders of magnitude the previously known bounds. The complexity of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the output.
Abstract: We derive an algorithm for computing all the homogeneous quandles of a given order n, provided that a list of the transitive permutation groups of degree n is known. We discuss the implementation of the algorithm, and use it to enumerate the isomorphism classes of homogeneous quandles up to order 23 and to compute representatives for each class. We also completely determine the homogeneous quandles of prime order. As a byproduct, we are able to replicate an earlier calculation of the connected quandles of order at most 30 and, based on this, to compute the number of isomorphism classes of simple quandles up to the same order.
Abstract: We present algorithms to compute the Smith normal form of matrices over two families of local rings. The algorithms use the black-box model, which is suitable for sparse and structured matrices. The algorithms depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic. For an n × n matrix A over the ring F[z]/(f^e), where f^e is a power of an irreducible polynomial f in F[z] of degree d, our algorithm requires O(μ de^2 n) operations in F, where our black box is assumed to require O(μ) operations in F to compute a matrix-vector product by a vector over F[z]/(f^e) (and μ is assumed to be greater than den). The algorithm only requires additional storage for O(den) elements of F. In particular, if μ=O(den), then our algorithm requires only O~(n^2 d^2 e^3) operations in F, which is an improvement on previous methods for small d and e. For the ring Z/p^eZ, where p is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The runtime is essentially linear in neμ log(p), where μ is the cost of black-box evaluation (assumed greater than n). To avoid the practical cost of conditioning, we give a Monte Carlo certificate which, at low cost, provides either a high probability of success or a proof of failure. The quest for a time- and memory-efficient solution without the restriction on the number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end.
Abstract: We deploy numerical semidefinite programming and conversion to exact rational inequalities to certify that, for a positive semidefinite input polynomial or rational function, any representation as a fraction of sums of squares of polynomials with real coefficients must contain polynomials in the denominator of degree no less than a given input lower bound. By Artin's solution to Hilbert's 17th problem, such representations always exist for some denominator degree. Our certificates of infeasibility are based on the generalization of Farkas' Lemma to semidefinite programming. The literature has many famous examples of impossibility of SOS representability, including Motzkin's, Robinson's, Choi's and Lam's polynomials, and Reznick's lower degree bounds on uniform denominators, e.g., powers of the sum of squares of each variable. Our work on exact certificates for positive semidefiniteness allows for non-uniform denominators, which can have lower degree and are often easier to convert to exact identities. Here we demonstrate our algorithm by computing certificates of impossibility for an arbitrary sum-of-squares denominator of degree 2 and 4 for some symmetric sextics in 4 and 5 variables, respectively. We can also certify impossibility of base polynomials in the denominator of restricted term structure, for instance as in Landau's reduction by one less variable.
Abstract: We present a high performance algorithm for the parallel multiplication of sparse multivariate polynomials on modern computer architectures. The algorithm is built on three main concepts: a cache-friendly hash table implementation for the storage of polynomial terms in distributed form, a statistical method for the estimation of the size of the multiplication result, and the use of Kronecker substitution as a homomorphic hash function. The algorithm achieves high performance by promoting data access patterns that favour temporal and spatial locality of reference. We present benchmarks comparing our algorithm to routines of other computer algebra systems, both in sequential and parallel mode.
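The third ingredient, Kronecker substitution, can be sketched in isolation. The toy version below (a naive dictionary product stands in for the paper's tuned parallel hash-table kernel) packs bivariate exponent vectors into single integers, multiplies univariately, and unpacks:

```python
def kronecker_mul(p, q, deg_bound):
    """Multiply sparse bivariate polynomials, given as {(i, j): coeff} dicts,
    via Kronecker substitution y -> x**B.  deg_bound must bound the degree
    in x of each input, so B = 2*deg_bound + 1 guarantees that packed
    exponents i + B*j of the product cannot collide."""
    B = 2 * deg_bound + 1

    def pack(poly):
        out = {}
        for (i, j), c in poly.items():
            e = i + B * j
            out[e] = out.get(e, 0) + c
        return out

    a, b = pack(p), pack(q)
    prod = {}
    for ea, ca in a.items():          # naive univariate product;
        for eb, cb in b.items():      # the paper uses a parallel hash table
            prod[ea + eb] = prod.get(ea + eb, 0) + ca * cb
    # unpack exponents back to (i, j) pairs, dropping cancelled terms
    return {(e % B, e // B): c for e, c in prod.items() if c != 0}
```

For instance, (x + y)(x - y) packs to (x + x^3)(x - x^3) with B = 3, whose product x^2 - x^6 unpacks to x^2 - y^2.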
Abstract: In a previous article, an implementation of lazy p-adic integers with a multiplication of quasi-linear complexity, the so-called relaxed product, was presented. Given a ring R and an element p in R, we design a relaxed Hensel lifting for algebraic systems from R/(p) to the p-adic completion R_p of R. Thus, any root of linear and algebraic regular systems can be lifted with a quasi-optimal complexity. We report on our implementations in C++ within the computer algebra system Mathemagix and compare them with the Newton operator. As an application, we solve linear systems over the integers and compare the running times with Linbox and IML.
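For contrast, the classical (non-relaxed) Newton–Hensel lifting that such relaxed algorithms are benchmarked against fits in a few lines. A minimal sketch over Z, assuming a simple root (f'(r) invertible mod p):

```python
def hensel_lift(f, df, r, p, k):
    """Lift a simple root r of f modulo p (f(r) = 0 and f'(r) != 0 mod p)
    to a root modulo p**k by Newton iteration r <- r - f(r)/f'(r),
    doubling the precision at each step (quadratic convergence)."""
    modulus = p
    target = p**k
    while modulus < target:
        modulus = min(modulus * modulus, target)
        # one Newton step at the doubled precision; pow(x, -1, m) inverts mod m
        r = (r - f(r) * pow(df(r), -1, modulus)) % modulus
    return r
```

For example, starting from 3^2 ≡ 2 (mod 7), two Newton steps lift the square root of 2 to a root of x^2 - 2 modulo 7^4.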
Abstract: Algorithms for computing lower bounds on valuations (e.g., orders of the poles) of the components of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients are considered. In addition to algorithms based on ideas which have already been utilized in computer algebra for treating normal first-order systems, a new algorithm using "tropical" calculations is proposed. It is shown that the latter algorithm is rather fast, and produces the bounds with good accuracy.
Abstract: We present algorithmic, complexity and implementation results for the problem of isolating the real roots of a univariate polynomial $\Ba \in L[y]$, where $L=\QQ(\alpha_1, \dots, \alpha_{\ell})$ is an algebraic extension of the rational numbers. Our bounds are single exponential in $\ell$ and match the ones presented in \cite{stissac2011} for the case $\ell=1$. We consider two approaches. The first, indirect approach, using multivariate resultants, computes a univariate polynomial with integer coefficients, among the real roots of which are the real roots of $\Ba$. The Boolean complexity of this approach is $\sOB(N^{4\ell+4})$, where $N$ is the maximum of the degrees and the coefficient bitsize of the involved polynomials. The second, direct approach tries to solve the polynomial directly, without reducing the problem to a univariate one. We present an algorithm that generalizes the Sturm algorithm from the univariate case, and modified versions of well-known solvers that are either numerical or based on Descartes' rule of signs. We achieve a Boolean complexity of $\sOB(\min\set{N^{4\ell + 7},N^{2\ell^2+6}})$ and $\sOB( \max\set{N^{\ell+5}, N^{2\ell+3}})$, respectively. We implemented the algorithms in \func{C} as part of the core library of MATHEMATICA and we illustrate their efficiency over various data sets.
Abstract: We present an algorithm to compute the annihilator of (i.e., the linear differential equations for) the multivalued analytic function $f^\lambda(\log f)^m$ in the ring $D_n$ of differential operators for a given nonconstant polynomial $f$, a nonnegative integer $m$, and a complex number $\lambda$. This algorithm consists in differentiating, with respect to $s$, the annihilator of $f^s$ in the ring $D_n[s]$, followed by an ideal quotient computation in $D_n$. The obtained differential equations constitute what is called a holonomic system in $D$-module theory. Hence, combined with the integration algorithm for $D$-modules, this enables us to compute a holonomic system for the integral of a function involving the logarithm of a polynomial with respect to some variables.
Abstract: We present algorithms for computing rational and hyperexponential solutions of linear $D$finite partial differential systems written as integrable connections. We show that these types of solutions can be computed recursively by adapting existing algorithms handling ordinary linear differential systems. We provide an arithmetic complexity analysis of the algorithms that we develop. A Maple implementation is available and some examples and applications are given.
Abstract: Let k be a field and let f be a polynomial of degree n in k[T]. The universal decomposition algebra A is the quotient of k[X_1, ..., X_n] by the ideal of symmetric relations (those polynomials that vanish on all permutations of the roots of f). We show how to obtain efficient algorithms to compute in A. We use a univariate representation of A, i.e. an isomorphism of the form A = k[T]/Q(T), since in this representation, arithmetic operations in A are known to be quasi-optimal. We give details for two related algorithms: to find the isomorphism above, and to compute the characteristic polynomial of any element of A.
Abstract: We provide algorithms computing power series solutions of a large class of differential or qdifferential equations. Their number of arithmetic operations grows linearly with the precision, up to logarithmic terms.
Abstract: Borel-fixed ideals play a key role in the study of Hilbert schemes. Indeed, each component and each intersection of components of a Hilbert scheme contains at least one Borel-fixed point, i.e. a point corresponding to a subscheme defined by a Borel-fixed ideal. Moreover, Borel-fixed ideals have good combinatorial properties, which make them very interesting from an algorithmic perspective. In this paper, we propose an implementation of the algorithm computing all the saturated Borel-fixed ideals with assigned number of variables and Hilbert polynomial, introduced from a theoretical point of view in the paper "Segment ideals and Hilbert schemes of points", Discrete Mathematics 311 (2011).
Abstract: A sequence $f_n(q)$ is $q$-holonomic if it satisfies a nontrivial linear recurrence whose coefficients are polynomials in $q$ and $q^n$. Our main theorem states that $q$-holonomicity is preserved under twisting, i.e., replacing $q$ by $\omega q$ where $\omega$ is a complex root of unity. Our proof is constructive, works in the multivariate setting of $\partial$-finite sequences and is implemented in the Mathematica package HolonomicFunctions. Our results are illustrated by twisting natural $q$-holonomic sequences which appear in quantum topology, namely the colored Jones polynomial of pretzel knots and twist knots. The recurrence of the twisted colored Jones polynomial can be used to compute the asymptotics of the Kashaev invariant of a knot at an arbitrary complex root of unity.
Abstract: We propose an efficient algorithm to solve polynomial systems whose equations are \emph{globally} invariant under an action of the symmetric group \(\mathfrak{S}_N\) acting on the variables \(x_{i}\), where the number of variables is a multiple of \(N\). For instance, we can assume that swapping two variables (or two pairs of variables) in one equation gives rise to another equation of the system (perhaps with a change of sign). The idea is to apply divided difference operators repeatedly to the original system in order to derive a new system of equations involving only the symmetric functions of a subset of the variables. The next step is to solve the system using Gröbner techniques; this is usually several orders of magnitude faster than computing the Gröbner basis of the original system, since the number of solutions of the corresponding ideal has been divided by at least \(N!\). To illustrate the algorithm and to demonstrate its efficiency, we apply the method to a well-known physical problem, the equilibria positions of vortices. This problem has been studied for almost 150 years and goes back to work by Lord Kelvin. Assuming that all vortices have the same vorticity, the problem can be reformulated as a system of polynomial equations invariant under an action of $\mathfrak{S}_N$. Using numerical methods, physicists have been able to compute solutions up to $N\leq 7$, but it was an open challenge to check whether the set of solutions is complete. A direct, naive application of Gröbner bases techniques gives rise to a hard-to-solve polynomial system: for instance, when \(N=5\), it takes several hours to compute the Gröbner basis and the number of solutions is \(2060\). By contrast, applying the new algorithm to the same problem gives rise to a system with \(17\) solutions that can be solved in less than \(0.1\) sec. Moreover, we are able to compute \emph{all} equilibria for \(N\leq8\) (the case \(N=8\) being completely new).
Abstract: We report on our experiences exploring state-of-the-art Gröbner basis computation. We investigate signature-based algorithms in detail. We also introduce new practical data structures and computational techniques for use both in signature-based Gröbner basis algorithms and in more traditional variations of the classic Buchberger algorithm. Our conclusions are based on experiments using our new freely available, open-source, standalone C++ library.
Abstract: In this paper we prove that computing the solution of an initial-value problem of the form $\dot{y}=p(y)$ with initial condition $y(t_0)=y_0\in\R^d$ at time $t_0+T$ with precision $e^{-\mu}$, where $p$ is a vector of polynomials, can be done in time polynomial in the value of $T$, $\mu$ and $Y=\sup_{t_0\leqslant u\leqslant T}\|y(u)\|_\infty$. Contrary to existing results, our algorithm works for any vector of polynomials $p$ over any bounded or unbounded domain and has a guaranteed complexity and precision. In particular we do not assume $p$ to be fixed, nor the solution to lie in a compact domain, nor do we assume that $p$ has a Lipschitz constant.
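A polynomial initial-value problem of this shape can be integrated numerically with a few lines of code; the sketch below uses a plain fixed-step Runge-Kutta scheme and makes no attempt at the certified error control and guaranteed complexity that are the paper's contribution.

```python
def rk4(p, y0, t0, t1, steps):
    """Integrate the scalar IVP y' = p(y), y(t0) = y0, up to t1
    with the classical 4th-order Runge-Kutta method."""
    h = (t1 - t0) / steps
    y = y0
    for _ in range(steps):
        k1 = p(y)
        k2 = p(y + h * k1 / 2)
        k3 = p(y + h * k2 / 2)
        k4 = p(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

# Polynomial right-hand side p(y) = y^2; the exact solution of
# y' = y^2, y(0) = 1/2 is y(t) = 1/(2 - t), so y(1) = 1.
y1 = rk4(lambda y: y * y, 0.5, 0.0, 1.0, 1000)
print(y1)  # close to 1.0
```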
Abstract: In this article, we study the problem of multiplying two multivariate polynomials which are somewhat but not too sparse, typically polynomials with convex supports. We design and analyze an algorithm which is based on a blockwise decomposition of the input polynomials, and which performs the actual multiplication in an FFT model or some other, more general, so-called "evaluated model". If the input polynomials have total degrees at most $d$, then, under mild assumptions on the coefficient ring, we show that their product can be computed with $O(s^{1.5337})$ ring operations, where $s$ denotes the number of all monomials of total degree at most $2d$.
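The "evaluated model" can be illustrated in the univariate case: multiply two polynomials by evaluating them at enough points, multiplying pointwise, and interpolating the product back. The sketch below uses exact rational arithmetic and naive Lagrange interpolation; the paper's algorithm additionally decomposes multivariate supports into blocks and uses FFT-style evaluation points.

```python
from fractions import Fraction

def eval_poly(c, x):
    """Horner evaluation of c[0] + c[1]*x + ... at x."""
    acc = Fraction(0)
    for a in reversed(c):
        acc = acc * x + a
    return acc

def interpolate(xs, ys):
    """Lagrange interpolation; returns the coefficient list of the
    unique polynomial of degree < len(xs) through the points (xs, ys)."""
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis = [Fraction(1)]          # running product of (x - xs[j])
        denom = Fraction(1)
        for j in range(n):
            if j == i:
                continue
            new = [Fraction(0)] * (len(basis) + 1)
            for k, bk in enumerate(basis):
                new[k + 1] += bk       # x * basis
                new[k] -= xs[j] * bk   # -xs[j] * basis
            basis = new
            denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs

def mul(f, g):
    """Multiply two polynomials in the evaluated model:
    evaluate at deg(f*g)+1 points, multiply pointwise, interpolate back."""
    n = len(f) + len(g) - 1
    xs = [Fraction(k) for k in range(n)]
    ys = [eval_poly(f, x) * eval_poly(g, x) for x in xs]
    return interpolate(xs, ys)

f = [Fraction(1), Fraction(2)]                # 1 + 2x
g = [Fraction(3), Fraction(0), Fraction(1)]   # 3 + x^2
print(mul(f, g))  # coefficients of 3 + 6x + x^2 + 2x^3
```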
Abstract: We consider the problem of computing critical points of the restriction of a polynomial map to an algebraic variety. This is of foremost importance since the global minimum of such a map is reached at a critical point. Thus, these points appear naturally in nonconvex polynomial optimization, which occurs in a wide range of scientific applications (control theory, chemistry, economics, ...). Critical points also play a central role in recent algorithms of effective real algebraic geometry. Experimentally, it has been observed that Gröbner basis algorithms are efficient at computing such points. Therefore, recent software based on the so-called Critical Point Method is built on Gröbner bases engines. Let $f_1, \ldots, f_p$ be polynomials in $\Q[x_1, \ldots, x_n]$ of degree $D$, $V\subset\C^n$ be their complex variety and $\pi_1$ be the projection map $(x_1,\ldots, x_n)\mapsto x_1$. The critical points of the restriction of $\pi_1$ to $V$ are defined by the vanishing of $f_1, \ldots, f_p$ and of some maximal minors of the Jacobian matrix associated to $f_1, \ldots, f_p$. Such a system is algebraically structured: the ideal it generates is the sum of a determinantal ideal and the ideal generated by $f_1,\ldots, f_p$. We provide the first complexity estimates on the computation of Gröbner bases of such systems defining critical points. We prove that under genericity assumptions on $f_1,\ldots, f_p$, the complexity is polynomial in the generic number of critical points, i.e. $D^p(D-1)^{n-p}\binom{n-1}{p-1}$. In particular, in the quadratic case $D=2$, the complexity of such a Gröbner basis computation is polynomial in the number of variables $n$ and exponential in $p$. We also give experimental evidence supporting these theoretical results.
Abstract: In this paper, we generalize the construction of border bases to nonzero-dimensional ideals for normal forms compatible with the degree, tackling the remaining obstacle for a general application of border basis methods. First, we give conditions for having a border basis up to a given degree. Next, we describe a new stopping criterion to determine when the reduction with respect to the leading terms is a normal form. This test, based on the persistence and regularity theorems of Gotzmann, yields a new algorithm for computing a border basis of any ideal, which proceeds incrementally degree by degree until its regularity. We detail it, prove its correctness, present its implementation and report experiments which illustrate its good practical behavior.
Abstract: In this paper we outline an algorithmic approach to compute Puiseux series expansions for algebraic surfaces. The series expansions originate at the intersection of the surface with as many coordinate planes as the dimension of the surface. Our approach starts with a polyhedral method to compute cones of normal vectors to the Newton polytopes of the given polynomial system that defines the surface. If as many vectors in the cone as the dimension of the surface define an initial form system that has isolated solutions, then those vectors are potential tropisms for the initial term of the Puiseux series expansion. Our preliminary methods produce exact representations for solution sets of the cyclic $n$-roots problem, for $n = m^2$, corresponding to a result of Backelin.
Abstract: The asymptotically fastest algorithms for many linear algebra problems on integer matrices, including solving a system of linear equations and computing the determinant, use high-order lifting. Currently, high-order lifting requires the use of a randomized shifted number system to detect and avoid error-producing carries. By interleaving quadratic and linear lifting, we devise a new algorithm for high-order lifting that allows us to work in the usual symmetric range modulo $p$, thus avoiding randomization. As an application, we give a deterministic algorithm to assay whether an $n \times n$ integer matrix $A$ is unimodular. The cost of the algorithm is $O((\log n)\, n^{\omega}\, \M(\log n + \log \|A\|))$ bit operations, where $\|A\|$ denotes the largest entry of $A$ in absolute value, and $\M(t)$ is the cost of multiplying two integers bounded in bit length by $t$.
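Plain linear $p$-adic lifting, the building block that high-order lifting accelerates, can be sketched in a few lines: invert $A$ once modulo $p$, then produce one base-$p$ digit of the solution of $Ax = b$ per iteration and read the result off in the symmetric range. This is a toy in-RAM sketch, not the paper's interleaved quadratic/linear scheme.

```python
def mat_inv_mod(A, p):
    """Inverse of a square integer matrix modulo a prime p (Gauss-Jordan)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(2 * n)]
    return [row[n:] for row in M]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve_padic(A, b, p, k):
    """Solve A x = b over the integers by linear p-adic lifting:
    one base-p digit of the solution is produced per iteration,
    then x is mapped into the symmetric range modulo p^k."""
    C = mat_inv_mod(A, p)                    # A^{-1} mod p, computed once
    x, r, pk = [0] * len(b), list(b), 1
    for _ in range(k):
        d = [v % p for v in matvec(C, r)]    # next p-adic digit vector
        x = [xi + pk * di for xi, di in zip(x, d)]
        r = [(ri - si) // p for ri, si in zip(r, matvec(A, d))]  # exact division
        pk *= p
    return [xi - pk if 2 * xi > pk else xi for xi in x]

A = [[2, 1], [1, 3]]
b = [5, 5]
x = solve_padic(A, b, 7, 4)
print(x)  # the integer solution [2, 1]
```

Note that only a residue computation and a multiplication by the precomputed $A^{-1} \bmod p$ happen inside the loop, which is what makes lifting attractive for large integer entries.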
Abstract: Let F be a differential field generated from the rational functions over some constant field by one hyperexponential extension. We present an algorithm to compute the solutions in F^n of systems of n first-order linear ODEs. Solutions in F of a scalar ODE of higher order can be determined by an algorithm of Bronstein and Fredet. Our approach avoids reduction to the scalar case. We also give examples of how this can be applied to integration.
Abstract: We describe an iterative refinement procedure for computing extended-precision or exact solutions to linear programming problems (LPs). Arbitrarily precise solutions can be computed by solving a sequence of closely related LPs with limited-precision arithmetic. The LPs solved at the iterations of this algorithm share the same constraint matrix as the original problem instance and are transformed only by modification of the objective function, right-hand side, and variable bounds. Exact computation is used to compute and store the exact representation of the transformed problems, while numeric computation is used for computing approximate LP solutions and applying iterations of the simplex algorithm. At all steps of the algorithm, the LP bases encountered in the transformed problems correspond directly to LP bases in the original problem description. We demonstrate that this algorithm is effective in practice for computing extended-precision solutions and that it leads to a direct improvement of the best known methods for solving LPs exactly over the rational numbers. A proof-of-concept implementation has been carried out within the SoPlex LP solver.
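The core loop, a limited-precision solve driven by exactly computed residuals, is the same one used in iterative refinement for linear systems; the toy sketch below applies it to a plain linear system with rational data (it is not SoPlex's LP implementation, which must also track bases and bounds).

```python
from fractions import Fraction

def approx_solve(A, b):
    """A deliberately limited-precision 'oracle': float Gaussian elimination
    with partial pivoting, standing in for the numeric LP solver."""
    n = len(A)
    M = [[float(A[i][j]) for j in range(n)] + [float(b[i])] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][j] - f * M[c][j] for j in range(n + 1)]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def refine(A, b, iterations=5):
    """Iterative refinement: approximate solve, exact rational residual, repeat."""
    n = len(A)
    x = [Fraction(0)] * n
    for _ in range(iterations):
        # Residual is computed in exact rational arithmetic.
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = approx_solve(A, r)          # limited-precision correction
        x = [xi + Fraction(d).limit_denominator(10**6) for xi, d in zip(x, dx)]
    return x

A = [[Fraction(1), Fraction(1, 3)], [Fraction(1, 4), Fraction(1, 5)]]
b = [Fraction(4, 3), Fraction(9, 20)]
x = refine(A, b)
print(x)  # converges to the exact solution [1, 1]
```

Each pass multiplies the attainable accuracy, so a handful of cheap floating-point solves plus exact residual bookkeeping yields the exact rational answer.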
Abstract: The Polynomial System Solving (PoSSo) problem is a fundamental NP-hard problem in computer algebra. Among many others, PoSSo has applications in areas such as coding theory and cryptology. Typically, the security of multivariate public-key cryptographic schemes (MPKC) such as the UOV cryptosystem of Kipnis, Shamir and Patarin is directly related to the hardness of PoSSo over finite fields. The goal of this paper is to further understand the influence of finite fields on the hardness of PoSSo. To this end, we consider the so-called {\it hybrid approach}. This is a polynomial system solving method dedicated to finite fields proposed by Bettale, Faug\`ere and Perret (Journal of Mathematical Cryptography, 2009). The idea is to combine exhaustive search with Gröbner bases. The efficiency of the hybrid approach is related to the choice of a trade-off between the two methods. We propose here an improved complexity analysis dedicated to quadratic systems. Whilst the principle of the hybrid approach is simple, its careful analysis leads to rather surprising and somewhat unexpected results. We first prove that the best trade-off (i.e. the number of variables to be fixed) minimizing the complexity is achieved by fixing a number of variables proportional to the number of variables $n$ of the system considered. Under a natural algebraic assumption, we then show that the asymptotic complexity of the hybrid approach is $2^{(3.31-3.62\,\log_2\left(q\right)^{-1})\, n}$, where $q$ is the size of the field (under the condition, in particular, that $\log(q)\ll n$). This is, to date, the best complexity for solving PoSSo over finite fields (when $q>2$). Indeed, we have been able to quantify the gain provided by the hybrid approach compared to a direct Gröbner basis method. For quadratic systems, we show (under a natural algebraic assumption) that this gain is exponential in the number of variables.
Asymptotically, the gain is $2^{1.49\,n}$ when both $n$ and $q$ grow to infinity and $\log(q)\ll n$.
Abstract: We propose algorithms performing sparse interpolation with errors, based on Prony's / Ben-Or's & Tiwari's algorithm, using a Berlekamp/Massey algorithm with early termination. First, we give a randomized algorithm that can determine a $t$-sparse polynomial $f$, where $f$ has exactly $t$ nonzero terms, from a bound $T \geq t$ and a sequence of $N = (2T+1)(e+1)$ evaluations $f(p^i)$, where $i=1,2,3,\ldots,N$ and $p$ is a field element, in the presence of $\leq e$ wrong evaluations in the sequence, spoiled either with random or misleading errors. We also investigate the problem of recovering the minimal linear generator from a sequence of field elements that are linearly generated but where, again, $\leq e$ elements are erroneous. We show that there exist sequences of $< 2t(2e+1)$ elements such that two distinct generators of length $t$ satisfy the linear recurrence up to $\leq e$ faults, at least if the field has characteristic unequal to 2. Uniqueness can be proven (for any field characteristic) for sequences of length $\geq 2t(2e+1)$ with $\leq e$ errors. Finally, we present the Majority Rule Berlekamp/Massey algorithm, which can recover the unique minimal linear generator of degree $t$ when given bounds $T \geq t$ and $E \geq e$ and the initial sequence segment of $2T(2E+1)$ elements. The latter yields a unique sparse interpolant for the first problem. This research is motivated by sparse interpolation algorithms with numeric noise, into which we can now bring outlier errors in the values.
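The classical Berlekamp/Massey step underlying all of this can be sketched over a prime field: it recovers the minimal linear generator of a sequence from a long enough prefix. This is the plain textbook version, without the early-termination and majority-rule machinery the paper adds for error tolerance.

```python
def berlekamp_massey(s, p):
    """Minimal linear generator of the sequence s over GF(p).
    Returns the connection polynomial C (low to high), so that
    sum_i C[i] * s[n-i] = 0 (mod p) for all valid n."""
    C, B = [1], [1]          # current / previous connection polynomials
    L, m, b = 0, 1, 1        # LFSR length, shift since last update, last discrepancy
    for n in range(len(s)):
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, -1, p) % p          # d / b mod p
        C = C + [0] * (len(B) + m - len(C))   # make room for x^m * B
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

# Fibonacci mod 101 satisfies s[n] = s[n-1] + s[n-2], i.e. the
# generator 1 - x - x^2, which is [1, 100, 100] modulo 101.
s = [1, 1]
for _ in range(10):
    s.append((s[-1] + s[-2]) % 101)
print(berlekamp_massey(s, 101))  # [1, 100, 100]
```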
Abstract: A variation of Zeilberger's holonomic ansatz for symbolic determinant evaluations is proposed which is tailored to deal with Pfaffians. The method is also applicable to determinants of skew-symmetric matrices, for which the original approach does not work. As Zeilberger's approach is based on the Laplace expansion (cofactor expansion) of the determinant, we derive our approach from the cofactor expansion of the Pfaffian. To demonstrate the power of our method, we prove, using computer algebra algorithms, some conjectures proposed in the paper "Pfaffian decomposition and a Pfaffian analogue of $q$-Catalan Hankel determinants" by Ishikawa, Tagawa, and Zeng. A minor summation formula related to partitions and Motzkin paths follows as a corollary.
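The Pfaffian cofactor expansion that the method builds on can be written down directly: expanding along the first row, $\mathrm{pf}(A) = \sum_{j\ge 2} (-1)^j a_{1j}\,\mathrm{pf}(A_{\hat 1\hat j})$, where $A_{\hat 1\hat j}$ deletes rows and columns $1$ and $j$. The naive exponential-time recursion below is for illustration only; the holonomic ansatz exists precisely to avoid it for structured families.

```python
def pfaffian(A):
    """Pfaffian of an integer skew-symmetric matrix via cofactor
    expansion along the first row (0-indexed: row/column 0)."""
    n = len(A)
    if n == 0:
        return 1          # pf of the empty matrix
    if n % 2:
        return 0          # odd-dimensional skew-symmetric matrices have pf = 0
    total = 0
    for j in range(1, n):
        if A[0][j] == 0:
            continue
        idx = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in idx] for r in idx]
        total += (-1) ** (j + 1) * A[0][j] * pfaffian(minor)
    return total

# For [[0,a,b,c],[-a,0,d,e],[-b,-d,0,f],[-c,-e,-f,0]],
# pf = a*f - b*e + c*d; here 1*6 - 2*5 + 3*4 = 8.
A = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
print(pfaffian(A))
```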
Abstract: A univariate polynomial $f$ over a field is decomposable if $f= g \circ h= g(h)$ for nonlinear polynomials $g$ and $h$. In order to count the decomposables, one has to know the number of equal-degree collisions, that is $f = g \circ h = g^* \circ h^*$ with $(g,h) \neq (g^{*}, h^{*})$ and $\deg g = \deg g^*$. Such collisions only occur in the wild case, where the field characteristic $p$ divides $\deg f$. Reasonable bounds on the number of decomposables over a finite field are known, but they are less sharp in the wild case, in particular for degree $p^2$. We provide a classification of all polynomials of degree $p^2$ with a collision. This yields the exact number of decomposable polynomials of degree $p^{2}$ over a finite field of characteristic $p$. We also present an algorithm that determines whether a given polynomial of degree $p^{2}$ has a collision or not.
Abstract: In this paper, we propose a new algorithm for computing the real roots of polynomial equations, or the subset of real roots lying in a given semi-algebraic set described by additional polynomial inequalities. The algorithm is based on a modified fixed-point continuation method for solving Lasserre's hierarchy of moment relaxations. We establish convergence properties for our algorithm. For a large-scale polynomial system with only a few real solutions in a given area, we can extract them quickly. Moreover, for a polynomial system with an infinite number of real solutions, our algorithm can also be used to find some isolated real solutions or real solutions on manifolds.
Abstract: In order to keep up with the demand for solutions to problems with ever-increasing data sets, both academia and industry have embraced commodity computer clusters with locally attached disks or SANs as an inexpensive alternative to supercomputers. With the advent of tools for parallel disk programming, such as MapReduce, STXXL and Roomy, which allow the developer to focus on higher-level algorithms, programmer productivity for memory-intensive programs has increased manyfold. However, such parallel tools were primarily targeted at iterative programs. We propose a programming model for migrating recursive RAM-based legacy algorithms to parallel disks. Many memory-intensive symbolic algebra algorithms are most easily expressed as recursive algorithms. In this case, the programming challenge is multiplied, since the developer must restructure such an algorithm with two criteria in mind: converting a naturally recursive algorithm into an iterative algorithm, while simultaneously exposing any potential data parallelism (as needed for parallel disks). This model alleviates the large effort going into the design phase of an external memory algorithm. Research in this area over the past 10 years has focused on per-problem solutions, without providing much insight into the connection between legacy algorithms and out-of-core algorithms. Our method shows how legacy algorithms employing recursion and non-streaming memory access can be more easily translated into efficient parallel disk-based algorithms. We demonstrate the ideas on the largest computation of its kind: the determinization via subset construction and minimization of very large nondeterministic finite automata (NFA). To our knowledge, this is the largest subset construction reported in the literature. Determinization for large NFA has long been a major computational hurdle in the study of permutation classes defined by token passing networks.
The programming model was used to design and implement an efficient NFA determinization algorithm that solves the next stage in analyzing token passing networks representing two stacks in series.
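The recursion-to-iteration restructuring can be illustrated on the very algorithm mentioned: subset construction driven by an explicit worklist instead of recursion. This is an in-RAM toy; the paper's contribution is carrying out the same restructuring at parallel-disk scale.

```python
from collections import deque

def determinize(nfa, start, alphabet):
    """Subset construction with an explicit worklist (no recursion):
    nfa maps (state, symbol) -> set of successor states. Returns the
    DFA transition table over frozensets and the set of subset states."""
    start_set = frozenset([start])
    dfa = {}
    work = deque([start_set])
    seen = {start_set}
    while work:
        S = work.popleft()
        for a in alphabet:
            # Union of the NFA successors of every state in the subset S.
            T = frozenset(q for s in S for q in nfa.get((s, a), ()))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                work.append(T)
    return dfa, seen

# NFA over {a, b} whose accepting runs guess that the second-to-last
# symbol is 'a' (a classic example; its DFA has 4 subset states).
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): {2}, (1, 'b'): {2}}
dfa, states = determinize(nfa, 0, 'ab')
print(len(states))  # 4
```

The worklist makes the pending work explicit as data (here a deque of subsets), which is exactly the shape that can then be batched, sorted, and streamed to parallel disks.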