
ADVANCES IN COMPUTER SCIENCES (ISSN:2517-5718)

Scott’s Qualitative Fixed Point Technique in Complexity Analysis of Algorithms

Maria Lopez-Ramirez1, Oscar Valero2 *

1 Juniper Innovating Travel Technology,  Gremis Fusters, 33, Balearic Islands, Spain
2 Department of Mathematical Sciences and Informatics, University of the Balearic Islands, Ctra, Baleares, Spain


Ramírez ML, Valero O. Scott’s Qualitative Fixed Point Technique in Complexity Analysis of Algorithms. Adv Comput Sci. 2018 Jan;1(2):101

Abstract

In 1972, D.S. Scott developed a qualitative mathematical technique for modeling the meaning of recursive specifications in Denotational Semantics. In this paper we show that the same original Scott technique remains helpful for the Asymptotic Complexity Analysis of algorithms, requiring only a reduced number of hypotheses and elementary arguments. Thus, we will disclose that such a qualitative approach presents a unified mathematical method that is useful for both Asymptotic Complexity Analysis and Denotational Semantics. Moreover, we will emphasize the introduced technique by applying the results to provide the asymptotic complexity (upper and lower bounds) of the running time of computing of a celebrated algorithm.

2010 AMS Classification: 47H10, 54F05, 68N30, 68Q55, 68Q25, 68W40

Keywords

Partial Order; Scott; Kleene; Fixed Point; Denotational Specification; Algorithmic Complexity; Recurrence Equation 

Introduction

In 1972, D.S. Scott developed a mathematical technique for modeling the meaning of recursive specifications in Denotational Semantics. The programming languages used to implement such specifications allow, in general, the use of recursive definitions (denotational specifications), in such a way that the meaning of the aforesaid definitions is expressed in terms of its own meaning. Nowadays, the aforementioned mathematical tool is known as the fixed point induction principle. Such a principle is based on fixed point theory, concretely on Kleene's fixed point theorem for monotone self-mappings defined on partially ordered sets (for a detailed treatment of the topic we refer the reader to [1-4]). Concretely, Scott's induction principle states that the meaning of a recursive specification is obtained as a fixed point of a non-recursive mapping, induced by the denotational specification, which is the supremum of the successive iterations of the aforesaid non-recursive mapping acting on a distinguished element of the model. The non-recursive mapping expresses the evolution of the program execution. Besides, the partial order encodes some computational information notion, so that each successive iteration of the mapping matches up with an element of the mathematical model which is greater than (or equal to) those associated to the preceding steps of the program execution. Of course, it is assumed that each iteration of the program computation provides more information about the meaning of the algorithm than those executed before. Therefore, the mentioned fixed point encodes the total information about the meaning provided by the elements of the increasing sequence of successive iterations and, in addition, no more information can be extracted from the fixed point than that provided by the elements of such a sequence.

Scott's fixed point principle has been applied successfully to model computational processes that arise in a natural way in two fields of Computer Science different from Denotational Semantics. Concretely, it has been applied to Asymptotic Complexity Analysis in [5] and to Logic Programming in [6]. In the sequel we focus our attention on the application to Asymptotic Complexity Analysis. In 1995, M.P. Schellekens showed in [5] that the seminal Scott idea of modeling the meaning of a denotational specification as, at the same time, the fixed point of a non-recursive mapping and the supremum of the sequence of successive iterations can be adapted to the Asymptotic Complexity Analysis of algorithms. Thus, Schellekens developed a fixed point technique to obtain asymptotic upper bounds of the complexity of those algorithms whose running time of computing satisfies a recurrence equation of Divide and Conquer type, in such a way that the Scott spirit was preserved. The Schellekens technique has a fundamental difference from Scott's: the fixed point principle is now based on the Banach fixed point theorem instead of Kleene's fixed point theorem. It must be pointed out that the method of Schellekens introduces a "distance" function, in fact a quasi-metric, which yields a measure of the degree of approximation of the elements that make up the model and, besides, encodes at the same time the information partial order. Moreover, in contrast to Scott's qualitative technique, the new method provides a unique fixed point of the self-mapping as the candidate running time of computing of the algorithm (the counterpart of the meaning of the denotational specification in the Scott case). The quantitative approach of the Schellekens technique was seen as an advantage over the Scott approach and inspired many works on the mathematical foundations of Asymptotic Complexity Analysis (see, for instance, [7-16]).

Although the preceding quoted works argue that the Schellekens quantitative approach is very appropriate for Asymptotic Complexity Analysis and illustrate its strengths, the main target of this paper is to make a plea for the original Scott fixed point principle, showing its utility for determining the upper and lower asymptotic bounds for those algorithms whose running time of computing fulfills a general recurrence equation. Thus, we will show that the Scott approach remains valid for both Denotational Semantics and Asymptotic Complexity Analysis.

The remainder of the paper is organized as follows: In Section 2 we briefly recall the basic notions from the asymptotic complexity of algorithms that will play a central role in our subsequent discussion. Section 3 is devoted to showcasing the utility of Kleene's fixed point theorem, and thus the Scott fixed point approach, for obtaining the upper and lower bounds of the complexity of those algorithms whose running time satisfies a general recurrence equation. All this will be done requiring only a reduced number of hypotheses and elementary arguments. Finally, we will illustrate the developed technique by applying the results to provide the asymptotic complexity (upper and lower bounds) of the running time of Quicksort.

Preliminaries

In this section we recall, on the one hand, the celebrated Kleene fixed point theorem and the pertinent notions about partially ordered sets that are necessary in order to state it. On the other hand, we recall the mathematical basics about the Asymptotic Complexity Analysis of algorithms that will play a central role in Section 3, where we will apply the Scott methodology to determine asymptotic bounds for the running time of computing.

The Kleene fixed point theorem

Following [1], a partially ordered set is a pair (X, ≤_X) such that X is a nonempty set and ≤_X is a binary relation on X which fulfills, for all x, y, z ∈ X, the following:
(i) x ≤_X x (reflexivity);
(ii) x ≤_X y and y ≤_X x ⇒ x = y (antisymmetry);
(iii) x ≤_X y and y ≤_X z ⇒ x ≤_X z (transitivity).

If (X, ≤_X) is a partially ordered set and Y ⊆ X, then an upper bound for Y in (X, ≤_X) is an element x ∈ X such that y ≤_X x for all y ∈ Y. An element z ∈ Y is least in (Y, ≤_X) provided that z ≤_X x for all x ∈ Y. Thus, the supremum of Y in (X, ≤_X), if it exists, is an element z ∈ X which is an upper bound for Y and, in addition, is least in (UB(Y), ≤_X), where UB(Y) = {u ∈ X : u is an upper bound for Y}. In addition, for a fixed x ∈ X, the sets {y ∈ X : x ≤_X y} and {y ∈ X : y ≤_X x} will be denoted by ↑_{≤_X} x and ↓_{≤_X} x, respectively.

On account of [17], a partially ordered set (X, ≤_X) is said to be chain complete provided that every increasing sequence has a supremum. A sequence (x_n)_{n∈N} is said to be increasing whenever x_n ≤_X x_{n+1} for all n ∈ N, where N denotes the set of positive integers. Moreover, a mapping f from a partially ordered set (X, ≤_X) into itself is monotone if f(x) ≤_X f(y) whenever x ≤_X y. Furthermore, a mapping f from a partially ordered set (X, ≤_X) into itself is said to be ≤_X-continuous if the supremum of the sequence (f(x_n))_{n∈N} is f(x) for every increasing sequence (x_n)_{n∈N} whose supremum exists and is x. Observe that every ≤_X-continuous mapping is always monotone with respect to ≤_X, i.e., f(x) ≤_X f(y) whenever x ≤_X y. In the following, given a mapping f from a partially ordered set (X, ≤_X) into itself, we will denote the set {x ∈ X : f(x) = x} by Fix(f).

Taking into account the above concepts, Kleene’s fixed point theorem can be stated as follows (see, for instance, [17]):

Theorem 1: Let (X, ≤_X) be a chain complete partially ordered set and let f: X → X be a ≤_X-continuous mapping. Then the following assertions hold:

1. If there exists x ∈ X such that x ≤_X f(x), then there exists x* ∈ Fix(f) such that x* ∈ ↑_{≤_X} x.

2. If there exists y ∈ X such that f(y) ≤_X y and y ∈ ↑_{≤_X} x, then x* ≤_X y.
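The iterative content of Theorem 1 can be made concrete with a small sketch (ours, not from the paper): partial functions N → N are modeled as Python dictionaries ordered by graph inclusion, the empty dictionary plays the role of the distinguished bottom element, and the meaning of the factorial specification emerges as the supremum of the successive iterates F^m(⊥).

```python
# The non-recursive functional induced by the specification
#   fact(0) = 1,  fact(n) = n * fact(n - 1).
# Partial functions are dicts; the empty dict is the bottom element.
def F(p):
    q = {0: 1}                       # the base case is always defined
    for n, v in p.items():
        q[n + 1] = (n + 1) * v       # extend wherever p was already defined
    return q

p = {}                               # bottom: the totally undefined function
for _ in range(10):                  # bottom <= F(bottom) <= F(F(bottom)) <= ...
    p = F(p)

print(p[5])                          # 120, i.e. 5!
```

Each iterate is a better-defined approximation of the factorial function, and the supremum of the chain is its full graph, exactly in the spirit of Scott's construction.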

Fundamentals of Asymptotic Complexity Analysis

We follow [18] as main reference for Asymptotic Complexity Analysis of algorithms.

The complexity of an algorithm is determined by the quantity of resources required by the algorithm to get a solution to the problem for which it has been designed. A typical resource is the running time of computing. Usually there are several algorithms able to solve a fixed problem, so one goal is to establish which of them solves the problem in the least time. With this aim, it is necessary to compare their running times of computing. This is done by means of asymptotic analysis, in which the running time of an algorithm is denoted by a function T: N → ]0, ∞] in such a way that T(n) matches up with the time taken by the algorithm to solve the problem for which it has been designed when the input data is of size n. Notice that we exclude 0 from the range of T because an algorithm always takes some amount of time to solve the problem for which it has been designed.

The running time does not only depend on the input data size n, but also on the particular input and the distribution of the input data. Hence, three possible behaviors are usually distinguished when the running time of an algorithm is discussed. These are the so-called best case, worst case and average case. The best case and the worst case for an input of size n are defined by the minimum and the maximum running time of computing over all inputs of size n, respectively. The average case for an input of size n is defined by the expected value or average running time of computing over all inputs of size n.
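As a toy illustration of these three behaviors (ours, not from the paper), one can count the comparisons performed by linear search over every input of a fixed size:

```python
from itertools import permutations

def comparisons(seq, target):
    """Comparisons made by linear search before it stops."""
    for i, x in enumerate(seq):
        if x == target:
            return i + 1
    return len(seq)

n = 4
costs = [comparisons(p, 1) for p in permutations(range(1, n + 1))]
best, worst = min(costs), max(costs)     # best case: 1, worst case: n
average = sum(costs) / len(costs)        # average case: (n + 1) / 2
print(best, worst, average)              # 1 4 2.5
```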

Generally, given an algorithm, to state the exact expression of the function which provides its running time of computing is a hard task. For this reason, in most situations the analysis is focused on bounding the running time of computing and, thus, on yielding an approximation of it. Of course, to approximate the running time of computing we can consider upper and lower bounds, which are given by the so-called O and Ω asymptotic complexity classes. Next we recall such notions. To this end, from now on, ≤ will stand for the usual partial order on ]0, ∞]. In the sequel, we will denote by τ the set {f: N → ]0, ∞] : f is monotone with respect to ≤ and unbounded}. Observe that if a function f models the running time of computing of an algorithm, then f is expected to be monotone and unbounded.

Consider two functions f, g ∈ τ. Then f ∈ O(g) (respectively, f ∈ Ω(g)) if and only if there exist n_0 ∈ N and c ∈ ]0, ∞[ satisfying f(n) ≤ c·g(n) (respectively, c·g(n) ≤ f(n)) for all n ∈ N with n ≥ n_0. Moreover, the case in which f ∈ O(g) ∩ Ω(g) is denoted by f ∈ Θ(g).

Clearly, if f ∈ τ describes the running time of computing of the algorithm under discussion, then the fact that f ∈ O(g) gives that the function g represents an asymptotic upper bound of the running time. Therefore, if we do not know the exact expression of the function f, then the function g provides approximate information about the running time of computing for each input size n, f(n), in the sense that the algorithm takes a time to process the input data of size n bounded above by the value g(n). Clearly, a similar interpretation can be given when we consider f ∈ Ω(g).
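The membership f ∈ O(g) can be sanity-checked numerically once candidate witnesses c and n_0 are fixed. The sketch below (ours; the two functions are hypothetical examples) performs a finite check over an initial segment of N, which of course is evidence rather than a proof:

```python
def in_O(f, g, c, n0, horizon=10_000):
    """Check f(n) <= c * g(n) for n0 <= n <= horizon: a finite sanity
    check of the witnesses (c, n0) in the definition of f in O(g)."""
    return all(f(n) <= c * g(n) for n in range(n0, horizon + 1))

f = lambda n: 3 * n + 7       # a hypothetical running time
g = lambda n: n * n

print(in_O(f, g, c=4, n0=7))  # True: 3n + 7 <= 4n^2 from n = 7 on
print(in_O(g, f, c=4, n0=7))  # False: n^2 eventually exceeds 4(3n + 7)
```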

We end the section noting that the set τ becomes a partially ordered set when we endow it with the partial order ≤_τ given by f ≤_τ g ⟺ f(n) ≤ g(n) for all n ∈ N.

The Scott fixed point technique applied to Asymptotic Complexity Analysis

In this section we show the utility of Kleene’s fixed point theorem, and thus of the Scott fixed point technique, as a mathematical tool for the analysis of the asymptotic complexity of algorithms.

Usually the analysis of the running time of computing of algorithms leads to recurrence equations on N of the following type:


where d ∈ τ is such that d(n) < ∞ for all n ∈ N, n_0, k ∈ N, and c_i, a_j ∈ ]0, ∞[ for all i = 1, …, n_0 and j = 1, …, k.

However, a class of recurrence equations that differ from those of type (1), and that arise in a natural way in the analysis of the running time of computing of many Divide and Conquer algorithms, are the so-called multi-term master recurrences (see, for instance, [19]). Such recurrence equations are given as follows:

where d ∈ τ is such that d(n) < ∞ for all n ∈ N, n_0, k ∈ N, and c_i, a_j ∈ ]0, ∞[ for all i = 1, …, n_0 and j = 1, …, k.

According to [20] the functions d in the preceding expressions are called forcing or input functions.

Examples of algorithms whose running time fulfills a recurrence equation of the above introduced types (either (1) or (2)) are the celebrated Quicksort (worst case), Merge sort (average case), the Hanoi Towers Puzzle, Largetwo (average case) and Fibonacci (see, for instance, [5,18,20,21]). Notice that among the aforementioned algorithms there are both recursive and non-recursive ones.
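For instance, the Hanoi Towers recurrence T(1) = 1, T(n) = 2T(n−1) + 1 is the classical instance of type (1) with k = 1, a_1 = 2 and d(n) = 1 (the concrete constants are the textbook ones, not taken from this paper), and it can be unfolded directly:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def hanoi(n):
    """Moves for the Hanoi Towers Puzzle: T(1) = 1, T(n) = 2*T(n-1) + 1."""
    return 1 if n == 1 else 2 * hanoi(n - 1) + 1

print([hanoi(n) for n in range(1, 6)])   # [1, 3, 7, 15, 31]
```

The unique solution is the closed form 2^n − 1; the fixed point technique of Section 3 bounds such solutions asymptotically without solving the recurrence exactly.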

In the classical literature many techniques have been developed for obtaining the asymptotic bounds of those algorithms whose running time of computing satisfies the preceding recurrence equations. In general, such techniques are specific for each case under study and involve tedious and hard arguments either from mathematical induction or from calculus involving integrals or limits. A general view of the classical treatment of the topic can be found in [18,22].

In what follows we develop a method, based on the Kleene fixed point theorem, which allows one to determine asymptotic bounds in those cases in which the running time of computing satisfies a general recurrence equation which retrieves the recurrence equations of type (1) and (2) as particular cases. The new method does not intend to compete with the standard techniques for analyzing the complexity of algorithms based on the classical arguments.

The authentic purpose of the novel method is to introduce a formal treatment of asymptotic complexity by means of really basic and elementary arguments which provide, in some sense, a fixed point theoretical counterpart of the classical techniques, in the same way that Scott's fixed point technique did in Denotational Semantics (see [2-4]). Besides, the new method presents the advantage, on the one hand, of avoiding the assumption of a "convergence condition" in the spirit of Schellekens for all functions involved (we refer the reader to [5,15] for a detailed treatment of the topic) and, on the other hand, of preserving the Scott spirit and being able to compute, in a natural way and simultaneously, both the meaning and the running time of computing of recursive algorithms.

The technique

Let k ∈ N and, for each i = 1, …, k, let g_i: N → N be an unbounded monotone function with respect to the partial order ≤ such that g_i(n) < n for all n ∈ N.
Fix n_0 ∈ N and denote by N_{n_0} the set {n ∈ N : n > n_0}. Assume that Φ: N_{n_0} × ]0, ∞]^k → ]0, ∞] is an unbounded monotone function with respect to the partial order ≤, where ≤ is defined point-wise, i.e., (x_1, …, x_{k+1}) ≤ (y_1, …, y_{k+1}) ⟺ x_i ≤ y_i for all i = 1, …, k + 1.

Next fix c_1, …, c_{n_0} ∈ ]0, ∞[. Consider the general recurrence equation on N given for all n ∈ N by

Clearly the recurrence equations of type (1) and (2) are obtained as particular cases of the recurrence equation (3) when

for all n ∈ N_{n_0} and for all x, x_1, …, x_k ∈ ]0, ∞[ and, in addition, the functions g_i involved are chosen respectively as follows:

1. g_i(n) = n − i for all n ∈ N_{n_0} and for all i = 1, …, k,

Another family of recurrence equations that arise in a natural way in the asymptotic analysis of algorithms is the one given as follows:

where the constants a_1, …, a_k have been replaced by functions a_i: N_{n_0} → ]0, ∞[ for all i = 1, …, k. Clearly, the preceding recurrence equations can be retrieved from (3) by taking the function Φ as follows:

for all n ∈ N_{n_0} and for all x, x_1, …, x_k ∈ ]0, ∞[. Of course, the one-dimensional case appears often in the literature (see [23]), i.e., the case in which k = 1 and, hence,

with a: N_{n_0} → ]0, ∞[ and d ∈ τ.

An illustrative example of an algorithm whose running time of computing satisfies a recurrence of type (7) is Quicksort (average behavior). Indeed, its running time is the solution to the recurrence equation given below (see, for instance, [20,22]):

With the aim of obtaining a technique able to provide asymptotic bounds of the running time of computing of algorithms whose analysis yields the preceding recurrence equations, we first stress that every recurrence equation of type (3) always has a unique solution, given the initial conditions c_1, …, c_{n_0} and taking into account that Φ and g_1, …, g_k are functions (and thus give a unique image for a given input). So we only need to focus our attention on how we can get a bound for such a solution without knowing its specific expression. To this end, denote by τ_{n_0,c} the subset of τ given by τ_{n_0,c} = {f ∈ τ : f(n) = c_n for all n ≤ n_0}.

Now, define the functional F_Φ: τ_{n_0,c} → τ_{n_0,c} by


for all f ∈ τ_{n_0,c}. It is clear that a function belonging to τ_{n_0,c} is a solution to the recurrence equation (3) if and only if it is a fixed point of the functional F_Φ.
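Assuming the natural reading of the elided definition, namely F_Φ(f)(n) = c_n for n ≤ n_0 and F_Φ(f)(n) = Φ(n, f(g_1(n)), …, f(g_k(n))) for n > n_0, the functional and its fixed point property can be sketched as follows (our illustration, instantiated on the Hanoi recurrence):

```python
def make_F(phi, gs, n0, c):
    """Functional F_phi on functions f: N -> (0, inf]: F_phi(f)(n) = c[n]
    for n <= n0 and phi(n, f(g_1(n)), ..., f(g_k(n))) for n > n0
    (our assumed reading of the elided definition)."""
    def F(f):
        return lambda n: c[n] if n <= n0 else phi(n, *(f(g(n)) for g in gs))
    return F

# Hanoi instance of (3): Phi(n, x) = 2x + 1, g_1(n) = n - 1, n0 = 1, c_1 = 1.
F = make_F(phi=lambda n, x: 2 * x + 1, gs=[lambda n: n - 1], n0=1, c={1: 1})

T = lambda n: 2 ** n - 1           # the unique solution of the recurrence
print(all(F(T)(n) == T(n) for n in range(1, 15)))   # True: T is a fixed point
```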

Taking into account the preceding notation we show that the fundamental assumptions that are required by the Kleene fixed point theorem, chain completeness and order-continuity, are satisfied in our approach.

Proposition 2: (τ_{n_0,c}, ≤_τ) is chain complete.

Proof: Let (f_m)_{m∈N} be an increasing sequence in (τ_{n_0,c}, ≤_τ). Then the function f given by

f(n) = sup{f_m(n) : m ∈ N}

for all n ∈ N is the supremum of (f_m)_{m∈N} in (τ_{n_0,c}, ≤_τ). Clearly, if f_m ∈ τ_{n_0,c} for all m ∈ N, then f ∈ τ_{n_0,c}.

Theorem 3: Let n_0 ∈ N and let Φ be the unbounded monotone function associated to (3). If there exists K ∈ ]0, ∞[ such that

Φ(n, x_1 + ε, …, x_k + ε) ≤ Φ(n, x_1, …, x_k) + Kε   (10)

for all n ∈ N_{n_0} and for all x_1, …, x_k, ε ∈ ]0, ∞[, then F_Φ is ≤_τ-continuous.

Proof: Suppose that (f_m)_{m∈N} is an increasing sequence in (τ_{n_0,c}, ≤_τ). Then Proposition 2 guarantees that the function f ∈ τ_{n_0,c} given by

f(n) = sup{f_m(n) : m ∈ N}

for all n ∈ N is the supremum of (f_m)_{m∈N}. Next define the function f_Φ ∈ τ_{n_0,c} as the supremum of the increasing sequence (F_Φ(f_m))_{m∈N}, i.e.,

f_Φ(n) = sup{F_Φ(f_m)(n) : m ∈ N}

for all n ∈ N.
Since Φ is monotone with respect to ≤ and f_m ≤_τ f for all m ∈ N, we obtain that F_Φ(f_m)(n) ≤ F_Φ(f)(n) for all n ∈ N_{n_0} and all m ∈ N. It follows that f_Φ(n) ≤ F_Φ(f)(n) for all n ∈ N_{n_0}. Hence

f_Φ ≤_τ F_Φ(f).   (11)

It remains to prove that F_Φ(f) ≤_τ f_Φ. To this end, we can assume without loss of generality that f_Φ(n) < ∞ for all n ∈ N_{n_0} since, otherwise, by inequality (11), we deduce that F_Φ(f)(n) = f_Φ(n) = ∞.

Then, given n ∈ N_{n_0} and ε ∈ ]0, ∞[, there exists m_ε ∈ N such that f(g_i(n)) ≤ f_{m_ε}(g_i(n)) + ε for all i = 1, …, k. The monotony of Φ and the regularity condition (10) then give F_Φ(f)(n) ≤ F_Φ(f_{m_ε})(n) + Kε ≤ f_Φ(n) + Kε. Since ε is arbitrary, we deduce that F_Φ(f)(n) ≤ f_Φ(n) for all n ∈ N_{n_0}. So F_Φ(f) ≤_τ f_Φ.

Therefore we conclude that F_Φ(f) = f_Φ and, thus, that F_Φ is ≤_τ-continuous.

Notice that the functions Φ given by (4) and (6) are unbounded and monotone and, in addition, they satisfy the regularity condition (10).

The next example proves that the monotony of Φ alone does not provide the ≤_τ-continuity of the functional F_Φ.

Example 4: Fix n_0 ∈ N and c_1, …, c_{n_0} ∈ ]0, ∞[. Let g_1, g_2: N → N be two unbounded monotone functions with respect to ≤ such that g_i(n) < n for all n ∈ N and i = 1, 2. Consider the function Φ: N_{n_0} × ]0, ∞]^2 → ]0, ∞] given by

It is obvious that Φ is monotone with respect to ≤. Moreover, Φ does not satisfy the regularity condition (10), since there is no K ∈ ]0, ∞[ such that

Besides, the sequence (F_Φ(f_m))_{m∈N} is increasing in (τ_{n_0,c}, ≤_τ) and its supremum is again the function f. Nevertheless, F_Φ(f) ≠ f.

The next example shows that the regularity condition (10) alone is not enough to assure the ≤_τ-continuity of the functional F_Φ.

Example 5: Fix n_0 ∈ N and c_1, …, c_{n_0} ∈ ]0, ∞[. Let g_1, g_2: N → N be two unbounded monotone functions with respect to ≤ such that g_i(n) < n for all n ∈ N and i = 1, 2. Consider the function Φ: N_{n_0} × ]0, ∞]^2 → ]0, ∞] given by

It is evident that Φ satisfies the regularity condition (10), since

for all n ∈ N_{n_0} and for all x_1, x_2 ∈ ]0, ∞]. Moreover, it is clear that Φ is not monotone with respect to ≤. Indeed,

for all x_1, x_2 ∈ ]0, ∞]. It follows that F_Φ is not monotone with respect to ≤_τ and, thus, it is not ≤_τ-continuous.

The next example yields that the ≤_τ-continuity of F_Φ does not guarantee that the function Φ fulfills condition (10).

Example 6: Fix n_0 ∈ N and c_1, …, c_{n_0} ∈ ]0, ∞[. Let g_1, g_2: N → N be two unbounded monotone functions with respect to ≤ such that g_i(n) < n for all n ∈ N and i = 1, 2. Consider the function Φ: N_{n_0} × ]0, ∞]^2 → ]0, ∞] given by

It is evident that the functional F_Φ is ≤_τ-continuous. However, the function Φ does not satisfy the regularity condition (10). Indeed, assume, for the purpose of contradiction, that the aforesaid condition holds. Then there exists K ∈ ]0, ∞[ such that

for all n ∈ N_{n_0} and for all x_1, x_2 ∈ ]0, ∞[. It follows that x_1 + x_2 + ε < K for all x_1, x_2 ∈ ]0, ∞[. Set x_1 = x_2 = K/2. Then, from the preceding inequality, we deduce that K + ε < K. This is impossible. So we conclude that Φ does not satisfy the regularity condition (10).

Once we have guaranteed that in our framework the main assumptions required to apply the Kleene fixed point theorem hold, we introduce the new methodology to provide asymptotic complexity bounds for those algorithms whose running time of computing fulfills a recurrence equation of type (3).

Theorem 7: Let n_0 ∈ N and let c_1, …, c_{n_0} ∈ ]0, ∞[. Assume that Φ: N_{n_0} × ]0, ∞]^k → ]0, ∞] is an unbounded monotone function with respect to ≤ which satisfies the regularity condition (10) and that f is the solution to the recurrence equation of type (3). If there exists g ∈ τ_{n_0,c} such that g ≤_τ F_Φ(g), then f ∈ Ω(g). Moreover, if there exists h ∈ τ_{n_0,c} such that h ∈ ↑_{≤_τ} g and F_Φ(h) ≤_τ h, then f ∈ O(h).

Proof: First of all, note that f is the unique solution to (3) and, thus, the unique fixed point of F_Φ in τ_{n_0,c}. Suppose that there exists g ∈ τ_{n_0,c} such that g ≤_τ F_Φ(g). Since (τ_{n_0,c}, ≤_τ) is chain complete and F_Φ is ≤_τ-continuous, we have, by assertion 1 in the statement of Kleene's fixed point theorem (Theorem 1), that there exists a fixed point f* ∈ τ_{n_0,c} of F_Φ such that f* ∈ ↑_{≤_τ} g and, hence, that f* ∈ Ω(g). The fact that F_Φ admits only one fixed point gives that f = f* and that f ∈ Ω(g).

Suppose now that there exists h ∈ τ_{n_0,c} such that h ∈ ↑_{≤_τ} g and F_Φ(h) ≤_τ h. Assertion 2 in the statement of Theorem 1 provides that f ≤_τ h. Therefore, f ∈ O(h).

In the light of the preceding result we infer that, in order to get the asymptotic bounds of an algorithm whose running time of computing satisfies a recurrence equation of type (3), it is enough to search for the bounds among those functions in τ_{n_0,c} that are "post-fixed" points of F_Φ (g ≤_τ F_Φ(g)) and those that are "pre-fixed" points of F_Φ (F_Φ(h) ≤_τ h). Moreover, the condition in the statement of Theorem 7 that states a relationship between the post-fixed point g and the pre-fixed point h, that is h ∈ ↑_{≤_τ} g, points out that the upper bound class O(h) must really be included in the lower bound class Ω(g), which is reasonable from a complexity theory viewpoint.
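Theorem 7 can be exercised numerically. In the sketch below (ours, on the Hanoi recurrence T(1) = 1, T(n) = 2T(n−1) + 1; the candidate bounds g and h are our own choices), g(n) = 2^(n−1) is a post-fixed point and the piecewise h is a pre-fixed point lying in the up-set of g, so the solution lies in Ω(g) ∩ O(h), i.e. in Θ(2^n):

```python
# Functional for T(1) = 1, T(n) = 2*T(n-1) + 1 (the Hanoi instance of (3)).
def F(f):
    return lambda n: 1 if n == 1 else 2 * f(n - 1) + 1

g = lambda n: 2 ** (n - 1)                         # candidate lower bound
h = lambda n: 1 if n == 1 else 2 ** (n + 1) - 5    # candidate upper bound

ns = range(1, 20)
post_fixed = all(g(n) <= F(g)(n) for n in ns)      # g <= F(g): f in Omega(g)
pre_fixed  = all(F(h)(n) <= h(n) for n in ns)      # F(h) <= h: f in O(h)
above      = all(g(n) <= h(n) for n in ns)         # h lies in the up-set of g
print(post_fixed, pre_fixed, above)                # True True True
```

Both bounds belong to Θ(2^n), matching the exact solution 2^n − 1.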

An illustrative example

Let us consider Quicksort. Its running time of computing f_Q, for the average behavior, is the solution to the recurrence equation (12), i.e., to the following recurrence [22]:

where c ∈ ]0, ∞[.


As indicated in Subsection 3.1, the preceding recurrence equation is retrieved from (3) when we consider:

k = 1, c_1 = c and n_0 = 1.


Notice that Φ(n, x + ε) < Φ(n, x) + Kε for all K ∈ ]2, ∞[, n ∈ N_{n_0} and x ∈ ]0, ∞[.

In the light of the preceding facts, we have that all assumptions required in the statement of Theorem 7 are satisfied. With the aim of making clear the use of the technique yielded by the aforesaid theorem, we verify that the asymptotic bounds known in the literature are retrieved by our approach. To this end, let f_Q be the solution to the recurrence equation (12).


By Theorem 7 we deduce that f_Q ∈ Θ(f), which immediately gives the well-known asymptotic bound f_Q ∈ Θ(f_log) (see, for instance, [24]), where f_log ∈ τ_{1,c} with
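The Θ(n log n) behavior can also be observed numerically. The sketch below (ours) uses the standard average-case Quicksort recurrence from the classical literature in its one-dimensional, type-(7) form T(n) = ((n+1)/n)·T(n−1) + 2c(n−1)/n, which we assume agrees with (12) up to constants:

```python
import math

def quicksort_avg(N, c=1.0):
    """Iterate T(1) = c, T(n) = ((n+1)/n)*T(n-1) + 2*c*(n-1)/n, the
    one-dimensional (type (7)) form of the average-case recurrence."""
    T = [0.0, c]                      # T[0] unused; T[1] = c
    for n in range(2, N + 1):
        T.append((n + 1) / n * T[n - 1] + 2 * c * (n - 1) / n)
    return T

T = quicksort_avg(100_000)
ratios = [T[n] / (n * math.log(n)) for n in (10**3, 10**4, 10**5)]
print(ratios)   # slowly increasing towards 2, consistent with T(n) ~ 2 n ln n
```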


We end this subsection pointing out that Scott's technique presents a unified mathematical approach useful at the same time for Asymptotic Complexity Analysis and Denotational Semantics. Thus, for instance, such a method allows one to model simultaneously both the meaning and the running time of computing of recursive algorithms using recursive denotational specifications. An easy, but representative, example to which the technique could be applied is given by an algorithm computing the factorial function by means of a recursive specification whose meaning satisfies the denotational specification (14) below and whose running time of computing fulfills the recurrence equation (15):


where c, d > 0.
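A minimal sketch of this simultaneous computation (ours, assuming (15) has the familiar shape T(1) = c, T(n) = T(n−1) + d): pairs (meaning, running time) are approximated by a single Kleene iteration.

```python
# One Kleene iteration computes both the meaning (factorial) and the
# running time T(1) = c, T(n) = T(n-1) + d; partial functions are dicts
# mapping n to the pair (n!, T(n)).  c = 1 and d = 2 are arbitrary choices.
def F(p, c=1, d=2):
    q = {1: (1, c)}                            # base case of (14) and (15)
    for n, (value, cost) in p.items():
        q[n + 1] = ((n + 1) * value, cost + d)
    return q

p = {}                                         # bottom element
for _ in range(8):                             # successive approximations
    p = F(p)

print(p[5])   # (120, 9): meaning 5! = 120 and running time c + 4*d = 9
```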

Conclusion

In 1972, D.S. Scott developed a qualitative mathematical technique for modeling the meaning of recursive algorithms in Denotational Semantics. We have shown that the same original Scott technique remains valid for the Asymptotic Complexity Analysis of algorithms. So we have seen that such a qualitative approach presents a unified mathematical method that is useful for both Asymptotic Complexity Analysis and Denotational Semantics. This fact presents an advantage over the quantitative technique, based on Banach's fixed point theorem, introduced in 1995 by M.P. Schellekens, because that technique was designed mainly for Asymptotic Complexity Analysis. Moreover, the use of the qualitative approach agrees with the qualitative character of the complexity analysis of algorithms in Computer Science, where providing the asymptotic behavior of the running time is more important than getting the exact expression of the running time itself. Furthermore, using the qualitative framework we avoid requiring the "convergence condition" assumed by the quantitative Schellekens approach, whose unique purpose is to guarantee the soundness of a distance, which provides the quantitative information but has no a priori computational motivation.

Acknowledgement

This research was funded by the Spanish Ministry of Economy and Competitiveness under Grants TIN2014-56381-REDT (LODISCO) and TIN2016-81731-REDT (LODISCO II) and AEI/FEDER, UE funds.

References

  1. Davey BA, Priestley HA. Introduction to Lattices and Order. Cambridge University Press, Cambridge; 1990.
  2. Scott DS. Outline of a mathematical theory of computation. In: Proc. 4th Annual Princeton Conference on Information Sciences and Systems. 1970; 169-176.
  3. Gunter CA, Scott DS. Semantic domains. In: van Leeuwen J, editor. Handbook of Theoretical Computer Science, Vol. B. MIT Press, Cambridge; 1990; 663-674.
  4. Stoy JE. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press, Cambridge; 1977.
  5. Schellekens MP. The Smyth completion: a common foundation for the denotational semantics and complexity analysis. Electron Notes Theor Comput Sci. 1995; 1: 535-556.
  6. Hitzler P, Seda AK. Mathematical Aspects of Logic Programming Semantics. CRC Press, Boca Raton; 2011.
  7. Alghamdi MA, Shahzad N, Valero O. Fixed point theorems in generalized metric spaces with applications to Computer Science. Fixed Point Theory Appl. 2013; 2013: 118.
  8. Alghamdi MA, Shahzad N, Valero O. On fixed point theory in topological posets, extended quasi-metrics and an application to asymptotic complexity analysis of algorithms. Fixed Point Theory Appl. 2015; 2015: 179.
  9. Cerdà-Uguet MA, Schellekens MP, Valero O. The Baire partial quasi-metric space: a mathematical tool for the asymptotic complexity analysis in Computer Science. Theor Comput Syst. 2012; 50: 387-399.
  10. García-Raffi LM, Romaguera S, Schellekens MP. Applications of the complexity space to the general probabilistic Divide and Conquer algorithms. J Math Anal Appl. 2008; 348: 346-355.
  11. Mohammadi Z, Valero O. A new contribution to fixed point theory in partial quasi-metric spaces and its applications to asymptotic complexity analysis of algorithms. Topol Appl. 2016; 203: 42-56.
  12. Rodríguez-López J, Schellekens MP, Valero O. An extension of the dual complexity space and an application to Computer Science. Topol Appl. 2009; 156: 3052-3061.
  13. Romaguera S, Schellekens MP. Quasi-metric properties of complexity spaces. Topol Appl. 1999; 98: 311-322.
  14. Romaguera S, Schellekens MP, Valero O. The complexity space of partial functions: a connection between Complexity Analysis and Denotational Semantics. Int J Comput Math. 2011; 88: 1819-1829.
  15. Romaguera S, Tirado P, Valero O. New results on mathematical foundations of asymptotic complexity analysis of algorithms via complexity spaces. Int J Comput Math. 2012; 89: 1728-1741.
  16. Romaguera S, Valero O. A common mathematical framework for asymptotic complexity analysis and denotational semantics for recursive programs based on complexity spaces. Advan Theories Math Models. 2012; 1: 99-120.
  17. Baranga A. The contraction principle as a particular case of Kleene's fixed point theorem. Discrete Math. 1991; 98: 75-79.
  18. Brassard G, Bratley P. Algorithmics: Theory and Practice. Prentice Hall, New Jersey; 1988.
  19. Kao M. Multiple-size divide-and-conquer recurrences. ACM SIGACT News. 1997; 28: 67-69.
  20. Cull P, Flahive R, Robson R. Difference Equations: From Rabbits to Chaos. Springer, New York; 1985.
  21. Blum M, Floyd R, Pratt V, Rivest R, Tarjan R. Time bounds for selection. J Comput Syst Sci. 1973; 7: 448-461.
  22. Cormen TH, Leiserson CE, Rivest RL. Introduction to Algorithms. MIT Press, New York; 1990.
  23. Wang X, Fu Q. A frame for general divide-and-conquer recurrences. Inform Process Lett. 1996; 59: 45-51.
  24. Miller R, Boxer L. Algorithms Sequential and Parallel: A Unified Approach. Charles River Media, Hingham; 2005.