An Introduction to Packed Computation as a New Powerful Approach to Dealing with NP-Complete Problems
Mr. Kavosh Havaledarnejad ( Icarus.2012@yahoo.com )
Abstract:
This text is a guideline for solving NP, NP-Complete and some NP-Hard problems by reducing ( mapping ) them to the RSS problem and using Packed Computation. It is the outline of a broad research effort conducted from 2008 to the present ( 2014 ). RSS ( a ( k, 2 )-CSP ) is an NP-Complete problem, as other NP-Complete problems can be reduced to it. The RSS problem can be solved very fast ( at least in the average case ) by several algorithms built on the Packed Computation idea. The researcher tried for many months, day and night, to construct a mathematical proof showing whether these algorithms are polynomial or exponential, and tested many ideas and visions, but this process has not yet succeeded. The whole of this text is devoted to RSS and Packed Processing; the aim, however, is to propose a new approach to designing algorithms.
Keywords: Time Complexity, Complexity Classes, Exact Algorithms, Combinatorics, Randomized Algorithms, Rules States Satisfiability Problem, Polynomial Reductions, Packed Computation, Packed State Process, Packed State Stochastic Process
1. Introduction
In this paper we introduce a new, powerful approach to designing algorithms, especially for dealing with NP-Complete problems. We also introduce a new NP-Complete problem and four algorithms for solving it. Experimental results show that for the average case ( randomly generated instances ) these algorithms are very fast and have low complexity. It is not yet clear whether these algorithms are in general polynomial or exponential; one may prove that they are polynomial ( implying NP = P ) or prove that they are exponential ( by exhibiting counterexamples ). Our focus, however, is on proposing this new device for attacking hard problems. From a practical point of view, working on optimization problems and NP-Hard problems is not a sound route to settling NP = P or to finding polynomial algorithms for NP problems, because the researcher never knows whether an algorithm reaches the best result or is only a greedy method that stops in a local optimum. Working on decision problems for which we do not know whether at least one result exists is not a sound route either, because when the algorithm returns "NO" the researcher cannot tell whether the algorithm is incorrect or the problem simply has no result. The approach used throughout this research was therefore to study NP problems that are known to have at least one result. The researcher first worked on the Map Coloring problem: it is a special case of RSS, it is NP-Complete, and it is a special case of Graph Coloring, but, usefully, we know that if the graph is planar then the problem has at least one result. RSS instances have a similar property: in 2.4 we explain how to produce a random RSS instance that is guaranteed to have at least one result. In such a setting, if the algorithm returns YES we know it is correct, and if it returns NO we know it is not. We can also solve optimization problems; for example, we can transform Maximum Clique to RSS and then solve it. Four principles were observed during this research: 1- Try to test more and more instances. 2- Try to design new ideas for stochastically generating ever more difficult, non-symmetric random instances as worst cases. 3- Design algorithms able to cover all of them. 4- Analyze the designed algorithms mathematically, looking for contradicting examples. The final step is the one that could yield a mathematical proof of NP = P, but it has not yet succeeded. The method used to find algorithms that solve NP-Complete problems very fast was itself a local-search hill-climbing method, operated by the researcher: every algorithm has neighbors that differ slightly from the prime algorithm, some acceptable and some not, and some simpler than the prime algorithm. Thus we can search for the algorithm with the best performance, or for the simplest algorithm for a mathematical proof, which the researcher believes is PSP-Alpha.

Chapter 2 completely reviews the Rules States Satisfiability ( RSS ) problem and its reductions. An RSS problem instance consists of several rules, each of which consists of several states. Every rule can stand in only one single of its states. Between two different rules, two states either have a connection or a conflict ( no connection ). The question is: is there a collection of states, exactly one chosen in each rule, such that all chosen states are connected to each other? An instance of RSS is in fact an instance of Maximum Clique in a multipartite graph whose nodes lie in several clusters ( independent sets ), so that within each cluster nodes are not connected to each other. We call every cluster a rule and call its nodes states. Only one difference exists between RSS instances and Maximum Clique instances: when solving an RSS instance we keep track of which states lie in the same rule, while when solving a Maximum Clique instance we do not note which nodes lie in the same cluster; in effect we forget the clusters even if they exist. In another vision, a General Boolean Satisfiability instance is a conjunction of several Boolean functions, and it is satisfied when all of the Boolean functions are satisfied. A Boolean function on n Boolean variables can stand in 2^n different states, some acceptable and some not according to the function. Based on the variables shared between different Boolean functions, some states of different functions conflict and some do not; we call two states in two functions connected when they do not conflict, and we call the Boolean functions rules. This gives a reduction from General Boolean Satisfiability instances to RSS instances.

Chapter 3 introduces a new idea in designing algorithms, a new family of algorithms called Packed Computation. When an element of a problem consists of several states of which a correct result may select only one, an algorithm based on Packed Computation sometimes selects only one state, called a single state, but sometimes selects several states at once, called a packed state. It is as if I were simultaneously at home eating breakfast, at the university studying, and in the car driving. In other words, classical search algorithms visit and revisit individual candidate results of the problem, while a packed-computation search algorithm visits combinations of them. Chapter 3.1 presents some exact polynomial algorithms based on this approach; chapter 3.2 presents some randomized algorithms. The latter are stochastic processes and Markov chains ( see [3] and [6] ), indeed multivariate Markov chains ( see [8] ). When an algorithm is randomized, its behavior depends on the sequence of random numbers injected into it, and there are some sequences for which the algorithm does not work; thus there is always a very low probability that the algorithm does not reach a result. But we may still be able to prove a polynomial expectation ( mathematical mean ): in [6], chapter 4.5.3, Sheldon M. Ross gives such a proof, showing that the expected number of steps for a local-search algorithm on the 2-SAT problem to reach a result is polynomial, of order n^2. We call such algorithms zero-error and say they are in the ZPP complexity class.

Chapter 4 collects complexity tables for all the algorithms. Experimental results show that all algorithms have complexity near polynomial, at least in the average case; in other words they were able to solve all tested instances very fast. Note that an algorithm with exponential running time can be more efficient than a polynomial algorithm for moderate instance sizes: for problems with small dimensions an exponential bound can beat a polynomial one ( see [7], Introduction ), although in the limit any exponential complexity overtakes any polynomial one. The polynomiality of algorithms matters when we deal with very large instances; in any event, the algorithms proposed in this paper have good performance in practice. The exact algorithms introduced in this paper are both polynomial-time, so to prove NP = P by these algorithms one must prove that they are correct. The two randomized algorithms, by contrast, have no fixed bound and terminate whenever they reach a result, so to prove NP = ZPP by these algorithms one must prove that their mean running time is polynomial. Analyzing these algorithms to show whether they are polynomial or exponential is a hard task and an open conjecture; here we only propose them.
1.1. Research Story
The primitive algorithms implemented and tested by the researcher were Energy Systems: systems that worked with real numbers between 0 and 1 as a type of uncertainty, seemingly akin to neurocomputation. They worked out a great number of random inputs, but there existed some instances they could not solve, so the researcher began combining these ideas with randomized local-search algorithms. The outcome was a system that worked with real numbers but in randomized computations. The researcher then eliminated this type of real-number uncertainty and designed systems that used 3 levels: 0, 1 and 0.5. Along these researches the researcher was working on General Boolean Satisfiability problems and Map Coloring. The researcher then reduced Boolean SAT to the RSS problem and to algorithms that used a multi-selection of states as a type of uncertainty, though the algorithms were still randomized. Finally the researcher designed some exact algorithms.
2. RSS Problem
This chapter is devoted to the RSS problem and to other NP-Complete problems generally. We start off illustrating NP-Complete problems, then propose the RSS problem as a new NP-Complete problem via a reduction from General Boolean Satisfiability, and then pursue the scope with reductions from other NP-Complete problems to RSS instances. All NP-Complete problems can be reduced or transformed to each other in a process that takes polynomial time, and all of them can be reduced to RSS instances in a process that takes polynomial time. Thus, if there exists an algorithm solving RSS instances in polynomial time, all we have to do is transform any other NP-Complete problem to RSS in one polynomial-time process and then solve it in another polynomial-time process ( see [5] ); the prime problem is then solved in polynomially many computational steps. For example, if we transform problem α to problem β then:

T_α( n ) = T_reduce( n ) + T_β( poly( n ) )
2.1. NP-Completes and NP-Hards
In computational complexity theory, the NP-Complete problems are a set of computational problems introduced by Richard Karp in his 1972 paper "Reducibility Among Combinatorial Problems" [2]. Karp used Stephen Cook's 1971 theorem, published in the paper "The Complexity of Theorem-Proving Procedures" [1], that the Boolean Satisfiability problem is NP-Complete. A problem ρ is in the class NP if a solution for it can be verified in polynomial time. A problem ρ is in the NP-Hard class if every problem in NP reduces to it. A problem is in the NP-Complete class if it is in both classes ( see [9] ). The beauty of NP-Complete problems is that they can be mapped to each other in polynomial time; this process is called reduction. In other words, an instance of a problem A can be transformed into an instance of a problem B if both are NP-Complete. There are many reductions for NP-Complete and NP-Hard problems; Karp's paper introduced 21 such problems and reductions among them ( Fig. 1 ).
Fig. 1 Reducibility among Karp's 21 NP-Complete problems
Up until now many efforts have been made to cope with NP-Complete and NP-Hard problems. There are many heuristic, randomized, greedy and other types of algorithms for solving them. Some use exponential time. Some find a very good result but do not guarantee that it is always the best. Some work well for many instances of a problem but fail on special cases. As an example, in the book "Probability and Computing: Randomized Algorithms and Probabilistic Analysis" [3], Eli Upfal and Michael Mitzenmacher present a randomized algorithm for finding Hamiltonian cycles in chapter 5.6.2, together with a mathematical proof bounding the probability that the algorithm fails to reach a result, over a random problem instance and the random behavior of the algorithm taken together.
2.2. Introducing RSS by reduction from General Boolean Satisfiability
In this session we introduce the Rules States Satisfiability problem by a simple example. Suppose we have four Boolean variables, namely α, β, γ and δ, and we want to satisfy four different logical functions on these variables, which we call A, B, C and D: A is OnlyOne( α, β, γ ), B is a function of β and δ, C is a function of α and δ, and D is a fourth function on these variables. Function A simply implies that exactly one of its operands α, β, γ can be One and the others must be Zero. We can write it as:

( α ∧ ¬β ∧ ¬γ ) ∨ ( ¬α ∧ β ∧ ¬γ ) ∨ ( ¬α ∧ ¬β ∧ γ )

All of these functions must be satisfied, so we can show all of them in one formula:

A ∧ B ∧ C ∧ D

This is in fact an instance of the Boolean Satisfiability problem, and the question is to find an assignment satisfying the whole of the functions. One can check that this instance of Boolean Satisfiability has only one satisfying assignment. But we know that OnlyOne can stand in only 3 different states, 100, 010 and 001 over ( α, β, γ ), which we call A1, A2 and A3 respectively. Function B can stand in only 2 satisfying states, which we call B1 and B2; likewise function C can stand in only 2 satisfying states, C1 and C2, and function D can stand in only 2 satisfying states, D1 and D2. We now call the functions rules. Whenever a state of one rule fixes a shared variable to a value contrary to a state of another rule, those two states have contrariety; we say they have conflict, and otherwise we say they are connected. Thus we have a new problem. We can show it in a diagram ( Fig. ? ):
Fig. ? Diagram of instance
In this new problem we must choose exactly one state in each rule such that the chosen states have no conflicts with each other. This is an example of the Rules States Satisfiability problem. In fact RSS is similar to a multivariate state machine.
Definition 2.2.1. A RSS instance is a problem consisting of n rules, each of which consists of several states. Between two different rules, any two states are either connected or have conflict. The question is: is there a satisfying assignment of exactly one state per rule such that all chosen states are connected to each other, or not? If the answer is yes: what is this assignment?
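Definition 2.2.1 can be made concrete with a minimal sketch. The encoding below ( rules as lists of state labels, conflicts as pairs of ( rule, state ) index pairs, and the function name ) is our assumption for illustration, not the paper's notation:

```python
from itertools import product

def solve_rss_brute_force(rules, conflicts):
    """Return one satisfying assignment (one state index per rule), or None.

    rules     : list of rules, each a list of state labels
    conflicts : iterable of pairs ((rule_i, state_a), (rule_j, state_b));
                any cross-rule pair not listed is considered connected.
    """
    norm = {frozenset(p) for p in conflicts}
    for choice in product(*[range(len(r)) for r in rules]):
        picked = list(enumerate(choice))  # (rule index, chosen state index)
        if all(frozenset({(i, a), (j, b)}) not in norm
               for k, (i, a) in enumerate(picked)
               for (j, b) in picked[k + 1:]):
            return list(choice)
    return None
```

This exhaustive search is exponential, of course; it only serves as a reference oracle for the reductions and the faster algorithms discussed later.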
In fact we have reduced a Boolean Satisfiability instance to a RSS instance, and it is obvious that if we can solve this RSS instance then we have solved the prime problem immediately. One can check that the result of the RSS instance above is the unique selection of states, one per rule, that are all connected to each other; transforming this result back to the prime problem gives its satisfying assignment. Thus we can write a 4-step algorithm for reducing Boolean Satisfiability instances to RSS instances.
Algorithm 2.2.1. Consider a General Boolean Satisfiability problem with n Boolean variables and M Boolean functions defined on them; its four steps are listed below.

Everything a clause in a 2-SAT problem does is cancel one case of the problem. In 2-SAT a bit can stand only in the zero state and the one state; if we expand the number of states per bit, the 2-SAT problem converts directly to the RSS problem, since each disconnection ( conflict ) obviously defines a twofold canceling clause. In addition, the RSS problem is a special case of CSP ( Constraint Satisfaction Problem ): when in an ( a, b )-CSP a denotes the number of different values each variable can select from and b denotes the number of variables each constraint can be defined on, RSS is a ( d, 2 )-CSP ( see [10] ).
2.3. Other Reductions
In this chapter some reductions from other NP-Complete problems to RSS are proposed.
2.3.1. Graph Coloring is a special case of RSS
Graph Coloring is an NP-Hard problem: the question is finding the minimum number of colors with which an arbitrary graph with n vertices can be colored. We can break this problem into smaller decision problems: for every k, can we color the graph with k colors? If these subproblems are solvable in polynomial time then the prime problem is polynomial-time solvable. Consider a decision Graph Coloring problem on n vertices: the question is coloring the graph so that no adjacent vertices have the same color, using at most k colors. This is immediately a RSS instance: we call the vertices rules and the colors states, and two states in two rules have conflict if they denote the same color on two adjacent vertices, establishing the reduction.
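As a sketch of this reduction, using the same hypothetical encoding as before ( rules as lists of states, conflicts as pairs of ( rule, state ) pairs ):

```python
def coloring_to_rss(n_vertices, edges, k):
    """k-coloring as RSS: vertex -> rule, color -> state; the same color on
    the two endpoints of an edge is a conflict."""
    rules = [list(range(k)) for _ in range(n_vertices)]
    conflicts = [((u, c), (v, c)) for (u, v) in edges for c in range(k)]
    return rules, conflicts
```

Any RSS solver then returns a proper k-coloring directly, since the chosen state of each rule is the color of the corresponding vertex.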
1- Consider M rules for the RSS instance, corresponding to the M Boolean functions.
2- In each rule, write all states of its corresponding function ( a function with x Boolean variables has 2^x different states ).
3- Cross out the states that are not true in the definition of their function ( for example for x ↔ y, the cases 01 and 10 are not true in the definition of the function ).
4- Connect states in different rules if they have no conflict on shared Boolean variables.

2.3.2. Reducing 3-SAT to 3-RSS
Fig. ? Reducing 3-SAT to RSS ( lines show conflicts )
There exists a famous reduction from 3-SAT to Maximum Clique ( see [5] ), but to a special case of Maximum Clique in which all nodes of the graph stand in separate independent sets of size 3. Thus if we consider these independent sets as rules with 3 states, we have a RSS instance. Two nodes conflict if (1) they correspond to literals in the same clause, or (2) they correspond to a variable and its inverse; for example, a small 3-CNF formula with four clauses is transformed into the graph of ( Fig. ? ).
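The clause-as-rule view might be sketched as follows; the DIMACS-style integer literals ( -x for the negation of x ) and the output encoding are our assumptions:

```python
def three_sat_to_rss(clauses):
    """Each clause becomes a rule whose states are its literals; two states
    in different rules conflict iff they are a variable and its negation."""
    rules = [list(c) for c in clauses]
    conflicts = []
    for i, ci in enumerate(rules):
        for j in range(i + 1, len(rules)):
            for a, lit1 in enumerate(ci):
                for b, lit2 in enumerate(rules[j]):
                    if lit1 == -lit2:  # x in one clause, not-x in the other
                        conflicts.append(((i, a), (j, b)))
    return rules, conflicts
```

Choosing one non-conflicting state per rule then picks one literal per clause that can be set true, i.e. a satisfying assignment.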
2.3.3. Maximum Clique is not immediately a RSS
Consider a social network on an internetwork, where some people are in each other's friends lists; we can model this as a friendship graph G. The question "what is the maximum clique of people who all know each other?" is an example of the Maximum Clique problem. Every RSS problem is immediately a Maximum Clique problem, obtained by omitting and eliminating the rules: if the RSS instance has n rules then the Maximum Clique result cannot be bigger than n, because it can select at most one node from each independent set ( rule ), and a clique of size n is a result of the RSS instance. The reverse is not true: a Maximum Clique problem in general is not immediately a RSS problem. A counterexample is the pentagon ( 5-cycle ): its maximum clique has size 2, but it cannot be configured in just 2 rules ( it can be configured in at least 3 different rules, which would not show a RSS solution ). Note that rules must be separate independent sets. Moreover, even if such a configuration exists, how must we find it, and what is the size of the Maximum Clique?
2.3.4. Reducing Maximum Clique to RSS
We explained that a Maximum Clique instance is not immediately a RSS instance; in this session we review a method for transforming a Maximum Clique instance into a RSS instance. Let G be a Maximum Clique instance with n nodes r1, r2, …, rn. We consider n rules corresponding to the nodes, each with two states, an on state and an off state. For each two nodes that are disconnected from each other we place a conflict between the on-on states of the corresponding rules. Such a structure satisfies the condition for being a clique: two rules whose corresponding nodes are not connected cannot both be in the on state, which means a RSS assignment is satisfying if and only if the on rules denote a clique in the graph. But this structure alone is not enough for Maximum Clique: it covers all cliques of the problem, and even a single node is a clique that satisfies this configuration without being a maximum clique.
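The on/off structure described above, without the counting attachment introduced next, might be sketched like this ( the encoding is our assumption ):

```python
def clique_to_rss_base(n, edges):
    """One on/off rule per node (off=0, on=1); an on-on conflict is placed
    for every NON-edge, so satisfying on-rules always form a clique.
    The cardinality gadget of the text, forcing clique size k, is omitted."""
    edge_set = {frozenset(e) for e in edges}
    rules = [["off", "on"] for _ in range(n)]
    conflicts = [((u, 1), (v, 1))
                 for u in range(n) for v in range(u + 1, n)
                 if frozenset({u, v}) not in edge_set]
    return rules, conflicts
```

With this base structure alone, the all-off assignment trivially satisfies the instance, which is exactly the deficiency the counting rules below are designed to repair.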
Thus we must attach to this structure a second structure that is satisfied if and only if the first structure shows a clique of size k; we can then find the maximum possible clique by a step-by-step test on clique sizes from 1 to n. We must design a structure that computes the summation of the rules that are in the on state. We design it with rules C1, C2, …, C_{n-1}, where n is the number of nodes of the graph. C1 consists of the states 0, 1a, 1b, 2. C2 consists of the states 0, 1a, 1b, 2a, 2b, 3. And every Ci consists of the states 0, 1a, 1b, …, ia, ib, i+1. We expect C_{n-1} to show the number of on states; then in each stage of a test for a k-clique we can fix the state of C_{n-1} to k and test for solving the RSS instance.

Now we must design the relations between these rules. For C1 we must configure the relation between C1, r1 and r2. r1 and r2 are Boolean, so together they can stand in 4 cases ( I use the word case to separate it from the word state on rules ): two of them have summation 1 ( on-off and off-on ), one of them 2 ( on-on ) and one of them 0 ( off-off ). We connect the 0 state of C1 to the off states of r1 and r2. We connect the 2 state of C1 to the on states of r1 and r2. We connect the 1a and 1b states arbitrarily, one to the on state of r1 and the off state of r2, and the other vice versa. Now we have characterized the relation between C1, r1 and r2, and C1 is the summation of r1 and r2.
For every Ci with i ≥ 2 we must configure its relation with C_{i-1} and a new rule r_{i+1} so that Ci is the summation of C_{i-1} and r_{i+1}. Since Ci is the summation of C_{i-1} and a new Boolean rule, the range of Ci is from 0 to i+1: one edition for 0, one edition for i+1, and two editions ( a and b ) for the others. We connect the 0 state of Ci to the off state of r_{i+1} and the 0 state of C_{i-1}. We connect the i+1 state of Ci to the on state of r_{i+1} and the i state of C_{i-1}. For the other values j of Ci there are two editions: we connect one of them, arbitrarily, to the off state of r_{i+1} and the j state of C_{i-1}, and the other to the on state of r_{i+1} and the j-1 state of C_{i-1}. Now Ci is the summation of the first i+1 r's. Following this process, C_{n-1} is configured to be the summation of all the r's, denoting the size of the clique; then by fixing the state of C_{n-1} to k we can test whether the problem has a clique of size k or not.
There are other ways of creating such an attachment structure for Maximum Clique; for example we can compute the summations with a binary tree. But here let us compare the number of generated states of the RSS configuration with the number of nodes n of the prime Maximum Clique instance. The number of C's is n − 1, every Ci has 2i + 2 states, and the r's contribute 2n states. Thus the number of generated states is:

∑_{i=1}^{n−1} ( 2i + 2 ) + 2n = ( n² + n − 2 ) + 2n = n² + 3n − 2

When we are dealing with a large graph this is almost equal to n². If the necessary time for solving a RSS instance with S states ideally is T( S ), then the necessary time for one k-clique test is about T( n² ), and since we must do n tests to find a Maximum Clique, the necessary time for solving a Maximum Clique is about n · T( n² ). But we can use binary search instead of testing all clique sizes, in which case the complexity is about log( n ) · T( n² ). It means that if RSS is solvable in polynomial time then Maximum Clique, Independent Set and Vertex Cover, although NP-Hard, are in P!!!
2.3.5. Other problems
It is a known principle that all problems in NP can be reduced to the SAT and 3-SAT problems, and we have explained how SAT and 3-SAT reduce to RSS. Independent Set and Vertex Cover, which are NP-Hard ( their optimization versions are not NP-Complete ), are immediately Maximum Clique instances ( see [5] ). 3-SAT can be reduced to 3-Colorability, which is immediately a RSS instance ( see [5] ). For a reduction from Hamiltonian Circuit to a SAT consisting of OnlyOne and NAND functions, see [11] ( I do not accept the main goal of that paper ). The N-Queens problem is immediately a RSS instance if every column is taken as a rule.
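The N-Queens remark can be sketched directly ( the encoding is assumed as before: columns become rules, rows become states ):

```python
def n_queens_to_rss(n):
    """Column i is a rule; its states are the n rows. Two placements conflict
    if they share a row or a diagonal."""
    rules = [list(range(n)) for _ in range(n)]
    conflicts = [((i, r1), (j, r2))
                 for i in range(n) for j in range(i + 1, n)
                 for r1 in range(n) for r2 in range(n)
                 if r1 == r2 or abs(r1 - r2) == j - i]
    return rules, conflicts
```

A RSS solution is then a row for each column, i.e. a valid placement of n queens.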
2.3.6. Reducing RSS to 3-RSS
Fig. ? Expanding a 4-state rule into two 3-state rules
A k-RSS instance is a RSS instance with at most k states per rule. We can reduce a k-RSS instance to a 3-RSS instance in which every rule has at most 3 states. Consider a rule with m states. We can divide these states into two sets a and b where:

|a| + |b| = m , |a| ≥ 2 , |b| ≥ 2 ( 2.3.6.1 )

Then we can replace the old rule with 2 new rules, the first consisting of ( a ) and a new extra state and the second consisting of ( b ) and a new extra state. The extra states have conflict with each other, and the states of ( a ) have conflict with the states of ( b ) too ( Fig. ? ). Thus the system must select exactly one original state, either from ( a ) or from ( b ), which means the mechanism works. We continue this division until all rules have at most 3 states. Based on the following theorem, this reduction is polynomial relative to the parameters of the first problem.
Theorem 2.3.6.1. For every arbitrary division process ( selecting states to be in the a or b sets arbitrarily ), for a rule consisting of m states, the number of generated objects ( new rules ) is m − 2.

Proof: By strong induction. Let the predicate P( m ) be true if and only if for every arbitrary division process, for a rule consisting of m states, the number of generated objects ( new rules ) is m − 2.

Base case: P( 3 ) is true, because a 3-state rule is itself an allowable rule, is one single object, and 3 − 2 = 1.

Inductive step: Assume that the predicate is true for all 3 ≤ j < m. We may divide the m states into sets a and b observing formula 2.3.6.1. Then we know the first rule has |a| + 1 states and the second has |b| + 1 states, and based on 2.3.6.1, |a| + 1 < m and |b| + 1 < m. Thus P( |a| + 1 ) and P( |b| + 1 ) are true, so they must generate new objects numbering respectively |a| + 1 − 2 and |b| + 1 − 2, and the summation of them is:

( |a| + 1 − 2 ) + ( |b| + 1 − 2 ) = |a| + |b| − 2 = m − 2

proving the theorem.
When we are dealing with a large instance with n rules and m states per rule, after the reduction we have about n( m − 2 ) rules and about 3n( m − 2 ) states. Let S1 = nm denote the size of the prime problem and S2 the size of the second problem; then ( this is also an upper bound ):

S2 ≤ 3n( m − 2 ) < 3nm = 3 · S1
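The division process and the m − 2 count of Theorem 2.3.6.1 can be sketched as follows; the halving division and the "link" placeholder for the extra state are our choices, and conflict bookkeeping is omitted:

```python
def split_rule(states):
    """Split a rule with m > 3 states into rules of <= 3 states (per 2.3.6):
    halves a and b each receive a new extra 'link' state, and the recursion
    models continuing the division on any rule still larger than 3 states."""
    if len(states) <= 3:
        return [list(states)]
    half = len(states) // 2  # guarantees |a| >= 2 and |b| >= 2 for m >= 4
    return (split_rule(list(states[:half]) + ["link"])
            + split_rule(list(states[half:]) + ["link"]))
```

Counting the returned rules for any m reproduces the theorem's m − 2 bound.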
2.4. Producing Random RSS Instances
In the introduction session we mentioned that researching a decision problem when we do not know whether it contains a valid result is not a sound approach to settling the NP = P question, because if the algorithm fails we cannot tell whether the algorithm is incorrect or the problem has no result.

This session explains an algorithm able to construct a random RSS instance containing at least one result. For a RSS instance consisting of n rules, each with exactly m states, we first assume that there exists a result G. We can show this result by the sequence G( 1 ), G( 2 ), …, G( n ) where 1 ≤ G( x ) ≤ m, and we can select every G( x ) uniformly at random. Then it is guaranteed that for every couple x ≠ y, state G( x ) of rule x and state G( y ) of rule y are connected to each other, meaning they have no conflict. Every other couple of states in the problem is connected with probability P, which is called the density of connection. We can do this with the following algorithm:
Algorithm 2.4.1. Consider a RSS problem with n rules and m states per each rule; its four steps are listed below.
It is clear that this algorithm guarantees that the problem space contains at least one result, but it does not guarantee that there are no others ( other stochastic results ). It is also clear that we can plant more than one result in the problem space and run the same algorithm ( the author found in practice that this produces worst cases ).
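Algorithm 2.4.1 might be sketched as follows; the dictionary encoding of the connection relation is our assumption, and G is returned only so the planting can be checked ( the algorithm itself "forgets" it ):

```python
import random

def random_rss(n, m, p, seed=None):
    """Plant a hidden result G, connect its states to each other, then
    connect every other cross-rule pair of states independently with
    probability p. Returns the connection table and the planted G."""
    rng = random.Random(seed)
    g = [rng.randrange(m) for _ in range(n)]  # one planted state per rule
    connected = {}
    for x in range(n):
        for y in range(x + 1, n):
            for s in range(m):
                for t in range(m):
                    planted = (s == g[x] and t == g[y])
                    connected[((x, s), (y, t))] = planted or rng.random() < p
    return connected, g
```

Setting p = 0 yields an instance whose only connections are the planted ones, while larger p adds the stochastic connections that may create further results.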
2.5. Worst Cases
In this part of the paper we introduce some worst cases that are a good shibboleth for testing the correctness of algorithms: many simple algorithms, and many complex algorithms that seemed to be correct, turned out to be false on these worst cases. All algorithms proposed in this paper solve these worst cases very fast.
2.5.1. Worst Case 1
This worst case is a pattern that can stand inside a problem structure, and a problem can consist of several of these patterns. We show the dimensions of the pattern as number of states × number of rules, and the pattern is designed over consecutive rules, which we first divide into consecutive pairs. Note that in the last session we assumed there exists a result G, so every rule has a state that belongs to the assumed global result. For each pair consisting of rules rx and ry we do the following: rx and ry each have one state belonging to the global result, which we call gx and gy; we then select other states for each of them uniformly at random, calling them a1, a2, … ( belonging to rx ) and b1, b2, … ( belonging to ry ). Then for each i we place conflicts between corresponding a and b states. Also we connect every ai and bi to all the states of the global result that the pattern covers ( other relations are connected or disconnected arbitrarily, based on the definitions of the previous session ). This pattern can cover the whole system, repeat within it, or be a part of it. This worst case defeats many simple algorithms that worked well in the average case.
2.5.2. Worst Case 2
This worst case is simply a problem with multiple assumed results. When we plant several results randomly in the problem structure and connect the other states randomly, the produced problem has at least that many results. Experimental results show that a problem with many rules, an arbitrary number of states per rule, several assumed results and a low connection probability is a very powerful worst case.
2.5.3. Worst Case 3
This worst case is similar to Worst Case 2, but instead of assuming perfect results we assume several abort results. In each abort result we single out two rules rx and ry: we place a conflict between the state of the abort result that stands in rx and the state that stands in ry, and connect all the other states of the abort result to each other. These abort results deceive the algorithms.
2.5.4. Worst Case 4
This worst case is the hardest known worst case for the proposed algorithms. Although solving it is simple for some algorithms, it is hard for classical heuristics, packed heuristics and packed exact algorithms. To create this worst case we first select two rules r1 and r2 uniformly at random. For each relation between states s and t where s is a part of the global result G and t is not: if s and t are both in r1 and r2, they are connected; if they are not, they have conflict. For each relation between s and t where neither is a part of the global result: if s and t are both in r1 and r2, they have conflict; if they are not, they are connected.
Algorithm 2.4.1 ( steps ):
1- For every x ≤ n select a value for G( x ) uniformly at random between 1 and m.
2- Connect all of the selected states to each other.
3- Connect every other pair of states with probability P, or disconnect them with probability 1 − P.
4- Forget what G was.

Theorem 2.5.4.1. The only result in such a structure is G.

Proof. For the whole system except r1 and r2 we have only 2 choices: we can select the states of all rules from G, or select all of them from outside G, because every state from G and every state from outside G ( not both lying in r1 and r2 ) have conflict. There are two cases:
1. If we choose the states of all rules except r1 and r2 from G, then for r1 and r2 we have only one selection: we must choose them from G, because the states outside G in r1 and r2 have conflict with the chosen states in the other rules. Thus G is a result.
2. If we choose the states of all rules except r1 and r2 from outside G, then the G states of r1 and r2 have conflict with them; but the states outside G in r1 and r2 also have conflict with each other, so there is no result in this case.
3. Packed Computation Idea
The term packed computation means conducting computational steps on packed data. In a classic algorithm every variable ( here a rule ) can select only one of its states, but in a packed-computation algorithm every variable ( here a rule ) can select more than one state as its current state. In a classic algorithm, when we assemble all variables we say we have one candidate result, or global state; in a packed-processing algorithm, when we assemble all variables we have a set of candidate results, or global states. We call this situation a Packed Global State. We say a Packed Global State is a Packed Result if every global state belonging to its set of global states is a correct result. In a RSS problem, if in a Packed Global State no active states have conflict, it is a Packed Result, as it is a set of many correct results.
Definition 3.1. A variable ( here a rule ) is in a packed state if, where its valid states are α, β, γ, δ, …, it is in a combination of them such as αβ, αγ, αβγ, … . When a variable is in only one state it is deterministic, but when it is in a packed state it is non-deterministic.
Definition 3.2. A packed global state, or packed candidate result, is a combination of variables that are each in a single state or in a packed state; in such a situation the set of candidates the system is visiting is obtained by the Cartesian product of the state sets of all variables.
Definition 3.3. A packed global state is a packed correct result if all of its sub-candidates are correct results.
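As a small illustration of Definitions 3.1 to 3.3, a packed global state can be expanded into its global states by a Cartesian product, and the packed-result check for RSS tests every pair of On states; the representation ( sets of On states per rule, conflicts as pairs ) is an assumption, not fixed by the paper:

```python
from itertools import product

def candidates(packed):
    """Expand a packed global state (one set of On states per rule)
    into the list of global states it represents (Definition 3.2)."""
    return [list(enumerate(choice)) for choice in product(*packed)]

def is_packed_result(packed, conflicts):
    """Definition 3.3 for RSS: a packed global state is a packed result
    iff no two On states of different rules conflict."""
    n = len(packed)
    return all(frozenset([(i, s), (j, t)]) not in conflicts
               for i in range(n) for j in range(i + 1, n)
               for s in packed[i] for t in packed[j])

packed = [{"a", "b"}, {"a"}, {"b", "c"}]   # rule 0 in packed state ab, rule 1 single, ...
print(len(candidates(packed)))             # 2 * 1 * 2 = 4 global states
```

The point of packed computation is precisely that the algorithms below never expand this product explicitly; they work on the per-rule state sets directly.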
3.1. PSP family ( Exact Algorithms )
In this section we will introduce three exact algorithms based on packed computation.
Definition 3.1.1. A process belongs to the Packed State Process class if it is based on packed computation, so that variables can visit more than one of their states along the computation, and it is an exact algorithm.
3.1.1. Basic Actions
Before introducing the exact algorithms we must first introduce their basic actions. These basic actions are common to all of them, and I guess there are many other algorithms that work with these basic actions, perhaps many of them also correct.
Action 3.1.1.1. GeneralReset : After this action all rules stand in a packed state where they select all of their valid states. For example, if in a 3-RSS every rule contains states α, β, γ, then after GeneralReset all of them stand in the αβγ state. In this situation the system contains all possible candidates.
Action 3.1.1.2. LocalReset ( x ) : After this action rule x stands in a packed state where it selects all of its valid states. This situation is independent of the state the rule was in before the action.
Definition 3.1.1.1. The priority of a state in a rule depends on the condition the rule is in. When a rule is in a packed state, meaning it selects more than one state, the priority of its states is 2. When a rule is in a single state, the priority of its only state is 1. Obviously, when a state is Off in a rule, meaning the rule does not select this state, the priority of this state is 0.
Action 3.1.1.3. LocalChangeDo ( I, J, s ) : In this action the system checks whether there exist one or more states t in rule J that are On and have conflict with state s of rule I; it then turns Off state s of rule I if and only if the priority of t is lower than or equal to the priority of s. This means a state with priority 2 cannot cancel a state with priority 1.
Action 3.1.1.4. LocalChangeReset ( K ) : In this action the system checks whether rule K contains no On state, meaning the rule is empty; if so, it resets the rule. Resetting a rule means it stands in all of its valid states, as after LocalReset ( K ).
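A minimal sketch of the four basic actions, assuming the same set-of-On-states representation as before; the function names are hypothetical, and the priority rule follows Definition 3.1.1.1 and Action 3.1.1.3 literally:

```python
def general_reset(rules, m):
    """Action 3.1.1.1: every rule stands in all of its m valid states."""
    for k in range(len(rules)):
        rules[k] = set(range(m))

def local_reset(rules, x, m):
    """Action 3.1.1.2: rule x stands in all of its valid states."""
    rules[x] = set(range(m))

def priority(rule, s):
    """Definition 3.1.1.1: 2 if the rule is packed, 1 if single, 0 if s is Off."""
    if s not in rule:
        return 0
    return 2 if len(rule) > 1 else 1

def local_change_do(rules, conflicts, i, j, s):
    """Action 3.1.1.3: turn Off state s of rule i if some On state t of rule j
    conflicts with it and priority(t) <= priority(s)."""
    if i == j or s not in rules[i]:
        return
    for t in list(rules[j]):
        if (frozenset([(i, s), (j, t)]) in conflicts
                and priority(rules[j], t) <= priority(rules[i], s)):
            rules[i].discard(s)
            return

def local_change_reset(rules, k, m):
    """Action 3.1.1.4: if rule k has no On state left, reset it."""
    if not rules[k]:
        rules[k] = set(range(m))
```

For instance, with `rules = [{0, 1, 2}, {0}]` and a conflict between state 0 of rule 0 and state 0 of rule 1, `local_change_do(rules, conflicts, 0, 1, 0)` turns state 0 of rule 0 Off ( the priority-1 single state cancels the priority-2 packed state ), while the call in the reverse direction on a fresh instance leaves the single-state rule untouched.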
3.1.2. PSP-Alpha
PSP-Alpha is a sequence of the basic actions proposed in the prior section that leads to a packed result, and it is an exact algorithm. Its worst-case complexity, for an RSS instance with n rules each containing m states ( for a 3-RSS instance n is the number of rules; for a 3-SAT instance reduced to 3-RSS, n is the number of clauses ), follows from the nested loops below; however, the algorithm usually converges much faster than this worst case. Let us divide the body of the algorithm into two parts, Tier and Scan. A Scan reviews the whole problem and uses the Basic Actions; this review has some parameters. The Tier calls different Scans and gives them parameters that change from scan to scan ( Fig. ? ).
Fig. ? Different tiers of algorithm
Algorithm 3.1.2.1. Scan. Consider an RSS instance with n rules labeled from 0 to n − 1, each rule containing m states labeled from 0 to m − 1. Every LocalChangeDo acts on a point of the problem, state s of rule I, which we call the Destination Address of the action, and it acts based on another rule J, which we call the Source Address of the action. The input parameters of a Scan change from scan to scan. Parameters a and b are shifts on the visiting of source rules and destination rules, and the state-shift parameters act on the visited destination states of the destination rules. Parameter c is a stride whose range is between 1 and n: when c is 1 the algorithm counts J as 0, 1, 2, …; when c is 2 it counts J as 0, 2, 4, …, 1, 3, 5, …; when c is 3 it counts J as 0, 3, 6, …, 1, 4, 7, …, 2, 5, 8, …; and so on. One state-shift parameter is new in that it shifts the state visiting based on the number of the destination rule ( I ).
Algorithm 3.1.2.2. PSP-Alpha ( Tier ) :
The outermost loop is only a simple repeat; the other loops produce different parameters for the Scans.
3.1.3. PSP-Beta
This algorithm is similar to PSP-Alpha; only the parameters that change between scans are different. Its worst-case complexity, for a 3-RSS or a reduced 3-SAT instance, again follows from the tier loops below.
Algorithm 3.1.3.1. Scan. In this process the input parameters of a Scan change from scan to scan. The new state-shift parameters produce more permutations of the state visiting, based on the number of the source rule ( J ), and invert the direction of motion ( when we have 3 states per rule ).
1- Count i from 0 to n − 1 and do {
2-   I = ( i + a ) mod n
3-   LocalReset ( I )
4-   jj = −c and js = 0
5-   Count j from 0 to n − 1 and do {
6-     jj = jj + c
7-     If jj > n − 1 then js = js + 1 and jj = js
8-     J = ( jj + b ) mod n
9-     If I ≠ J do {
10-      Count s from 0 to m − 1 and do {
11-        S = ( s + s0 + s1 · I ) mod m
12-        LocalChangeDo ( I, J, S )
13-      }
14-    }
15-  }
16-  LocalChangeReset ( I )
17- }
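The stride counting of J described in Algorithm 3.1.2.1 ( 0, 2, 4, …, 1, 3, 5, … for c = 2, and so on ) can be sketched on its own; `stride_order` is a hypothetical helper name:

```python
def stride_order(n, c):
    """Visit 0..n-1 in the order the Scan uses for stride parameter c:
    first 0, c, 2c, ..., then 1, 1+c, ..., and so on."""
    order, jj, js = [], -c, 0
    for _ in range(n):
        jj += c
        if jj > n - 1:       # ran past the end: start the next residue class
            js += 1
            jj = js
        order.append(jj)
    return order

print(stride_order(6, 2))    # [0, 2, 4, 1, 3, 5]
print(stride_order(9, 3))    # [0, 3, 6, 1, 4, 7, 2, 5, 8]
```

For any c between 1 and n this visits each of 0..n−1 exactly once, so different values of c give the Tier genuinely different scan orders over the same rules.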
1- GeneralReset
2- Repeat these commands n times:
3-   Count a from 0 to n − 1 and do {
4-     Count b from 0 to n − 1 and do {
5-       Count c from 1 to n and do {
6-         Count s0 from 0 to m − 1 and do {
7-           Count s1 from 0 to m − 1 and do {
8-             Scan ( a, b, c, s0, s1 )
9-             Check if it is a result
10-          }
11-        }
12-      }
13-    }
14-  }
Algorithm 3.1.3.2. PSP-Beta ( Tier ) :
3.2. PSSP family (Randomized Algorithms)
In the prior sections some exact algorithms were proposed. They work very fast in practice, especially on average cases, but their worst-case complexity is large. In this section we review two randomized algorithms that work slowly but whose complexity is small: PSSP-I and PSSP-II. The Basic Actions of the prior algorithms were conflict-based, as they eliminate every conflict they visit in a regular rhythm, but PSSP-I is a connection-based algorithm: it expands the packed candidate based on new states that are connected to the current states. Since we expect the whole configuration to be connected ( if not, we divide the problem into 2 new sub-problems and solve them ), such a process drives the system toward the initial state very fast ( the case where all states in all rules are On ). Thus, as a restriction, the system sometimes
1- Count i from 0 to n − 1 and do {
2-   I = ( i + a ) mod n
3-   LocalReset ( I )
4-   jj = −c and js = 0
5-   Count j from 0 to n − 1 and do {
6-     jj = jj + c
7-     If jj > n − 1 then js = js + 1 and jj = js
8-     J = ( jj + b ) mod n
9-     If I ≠ J do {
10-      Count s from 0 to m − 1 and do {
11-        S = ( s + s0 + s1 · I + s2 + s3 · J ) mod m
12-        LocalChangeDo ( I, J, S )
13-      }
14-    }
15-  }
16-  LocalChangeReset ( I )
17- }
1- GeneralReset
2- Count a from 0 to n − 1 and do {
3-   Count b from 0 to n − 1 and do {
4-     Count c from 1 to n and do {
5-       Count s0 from 0 to m − 1 and do {
6-         Count s1 from 0 to m − 1 and do {
7-           Count s2 from 0 to m − 1 and do {
8-             Count s3 from 0 to m − 1 and do {
9-               Scan ( a, b, c, s0, s1, s2, s3 )
10-              Check if it is a result
11-            }
12-          }
13-        }
14-      }
15-    }
16-  }
selects a single one of the On states uniformly at random, to dominate this expansion as an antithesis. PSSP-II is similar to the prior algorithms but is randomized.
3.2.1. Basic Actions
Let us define some new Basic Actions. The new algorithms use these new basic actions as well as the prior ones.
Action 3.2.1.1. ConnLocalChange ( X ) : In this action the system checks all On states of X: if state s of X is On but is not connected to at least one On state in every other rule, then the system turns s Off. In other words, for state s of rule X, if there exists a rule Y such that no On state of Y is connected to s, the system turns s Off in X.
Action 3.2.1.2. RandomSelect ( X ) : After this action rule X selects, uniformly at random, only one of the states that were active before executing the action. For example, if X is αβ, after this action it will become α or β uniformly at random.
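The two actions can be sketched as follows, reusing the earlier set-based representation ( an assumption ); the connection-based action is the one the pseudocode calls ConnLocalChange:

```python
import random

def conn_local_change(rules, conflicts, x):
    """Turn Off every On state s of rule x for which some other rule y
    has no On state connected to s."""
    for s in list(rules[x]):
        for y in range(len(rules)):
            if y == x:
                continue
            if not any(frozenset([(x, s), (y, t)]) not in conflicts
                       for t in rules[y]):
                rules[x].discard(s)   # no On state of rule y is connected to s
                break

def random_select(rules, x):
    """Action 3.2.1.2: rule x keeps exactly one On state, chosen uniformly."""
    if rules[x]:
        rules[x] = {random.choice(sorted(rules[x]))}
```

Note that `conn_local_change` never consults priorities: it is driven purely by connectivity, which is what distinguishes the PSSP family from the conflict-based PSP actions.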
3.2.2. PSSP-I: An Asynchronous PSSP ( a heuristic local search )
PSSP-I is a randomized algorithm based on Packed Processing. It has 4 different editions. Its complexity for a 3-RSS or a reduced 3-SAT instance follows from the loop bounds in the box below.
Algorithm 3.2.2.1. PSSP-I
The following box presents all 4 editions of the PSSP-I algorithm.
In designing this algorithm, selecting line 5 ( command 1 ) or line 7 ( command 2 ), and selecting line 11 ( command 1 ) or line 13 ( command 2 ), is arbitrary; thus the algorithm has 4 different editions, respectively PSSP-I-11, PSSP-I-12, PSSP-I-21 and PSSP-I-22.
3.2.3. PSSP-II: A Censored Asynchronous PSSP
This algorithm is similar to the exact algorithms proposed in section 3.1, but randomized; its theoretical complexity for a 3-RSS or a reduced 3-SAT instance follows from its loop bounds.
Algorithm 3.2.3.1. PSSP-II
1- GeneralReset
2- Count c from 0 to 7 and do {
3-   Count x from 0 to n − 1 and do {
4-     Do one of these commands arbitrarily:
5-       R = x                            ( command 1 )
6-     or
7-       R = random mod n                 ( command 2 )
8-     ConnLocalChange ( R )
9-     LocalChangeReset ( R )
10-    Do one of these commands arbitrarily:
11-      E = True iff random mod 2 = 0    ( command 1 )
12-    or
13-      E = True iff c mod 2 = 0         ( command 2 )
14-    If E = True then
15-      RandomSelect ( R )
16-  }
17-  Check if it is a result
18- }
4. Experimental Results and Complexity Diagrams
In this section we review the complexity of the algorithms in practice, based on executed instructions. We count only the instructions of the basic actions of the innermost loops; thus we count only the cost of LocalChangeDo for PSP-Alpha, PSP-Beta and PSSP-II, and of ConnLocalChange for PSSP-I. Every LocalChangeDo contains m instructions, because it must check whether all states of one rule have conflict with a single state of another rule, and every ConnLocalChange contains on the order of n · m · m instructions, because it must check all states of all rules against all states of one rule.
These experiments are based on sizes that are powers of 2, like n = 2, 4, 8, …, 4096. The beauty of this approach is that it contains both small instances and large instances. Positive tests on small instances show that the correctness property of the algorithms is not an asymptotic behavior; positive tests on large instances show how powerful the algorithms are. These experiments give us a sequence of numbers of instructions ( average case or maximum case ) T_1, T_2, …, T_k. For two consecutive sizes n_i and n_{i+1}, the size of the second is twice the size of the first: n_{i+1} = 2 · n_i. Let us assume that the complexity of the system is a polynomial of the form T( n ) = c · n^a. We have:

{ T_i = c · n_i^a ,  T_{i+1} = c · ( 2 · n_i )^a }    ( 4.1 )

Solving this system obtains:

a_i = log2( T_{i+1} / T_i )    ( 4.2 )

Thus we obtain a sequence of exponents a_1, …, a_{k−1}. We can find the estimated exponent by this formula:

ā = ( 1 / ( k − 1 ) ) · Σ_{i=1}^{k−1} a_i    ( 4.3 )

This gives us the estimated complexity, which we can show as n^ā. But we can also compute a Deviation for it; the Deviation shows how much the practical exponents deviate from this estimation. We have:

Dev( ā ) = ( 1 / ( k − 1 ) ) · Σ_{i=1}^{k−1} | ā − a_i |    ( 4.4 )
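The doubling estimate described above ( fit T( n ) = c · n^a between consecutive doubled sizes, then average the exponents and compute their deviation ) can be sketched as:

```python
from math import log2

def estimate_exponent(counts):
    """Instruction counts measured at doubling sizes n, 2n, 4n, ...:
    per-step exponent a_i = log2(T_{i+1} / T_i) from T(n) = c * n**a;
    return the mean exponent and its mean absolute deviation."""
    exps = [log2(counts[i + 1] / counts[i]) for i in range(len(counts) - 1)]
    mean = sum(exps) / len(exps)
    dev = sum(abs(mean - a) for a in exps) / len(exps)
    return mean, dev

# a cubic running time gives exponent 3 with zero deviation
print(estimate_exponent([n ** 3 for n in [2, 4, 8, 16, 32]]))   # (3.0, 0.0)
```

This is exactly how the AVG CPX / AVG DEV and WRS CPX / WRS DEV rows of the tables below are produced from the AVG and WRS columns.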
PSP-Alpha m = 3 Density = 0.5 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 28.26 72 100% 128 1523024.64 7461504 100%
1- 𝐺𝑒𝑛𝑒𝑟𝑎𝑙𝑅𝑒𝑠𝑒𝑡
2- Count 𝑐 from to 𝑛 𝑚 and do {
3- 𝑥 𝑟𝑎𝑛𝑑𝑜𝑚 𝑚𝑜𝑑 𝑛
4- 𝑦 𝑟𝑎𝑛𝑑𝑜𝑚 𝑚𝑜𝑑 𝑛
5- If 𝑥 𝑦 then do 𝑚 time
6- {
7- 𝑠 𝑟𝑎𝑛𝑑𝑜𝑚 𝑚𝑜𝑑 𝑚
8- 𝐿𝑜𝑐𝑎𝑙𝐶 𝑎𝑛𝑔𝑒𝐷𝑜 𝑥 𝑦 𝑠)
9- 𝐿𝑜𝑐𝑎𝑙𝐶 𝑎𝑛𝑔𝑒𝑅𝑒𝑠𝑒𝑡 𝑥 )
10- }
11- Check if it is a result
12- }
4 292.68 1512 100% 256 6163084.8 22325760 100%
8 4964.4 21168 100% 512 22345989.12 98896896 100%
16 16200 108000 100% 1024 86737305.6 358262784 100%
32 50353.92 366048 100% 2048 347496099.84 1924245504 100%
64 339655.68 1342656 100% 4096 1458255052.8 6944071680 100%
AVG CPX = AVG DEV = 0.797 WRS CPX = WRS DEV = 0.847
( Table. 1 )
PSP-Alpha m = 3 Density = 0.8 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 18 18 100% 128 3735141.12 15215616 100%
4 108 108 100% 256 12990067.2 35251200 100%
8 1789.2 11592 100% 512 52533089.28 209567232 100%
16 39333.6 354240 100% 1024 222217205.76 971080704 100%
32 170256.96 830304 100% 2048 957217812.48 3622109184 100%
64 902845.44 3302208 100% 4096 3266732851.2 12982394880 100%
AVG CPX = AVG DEV = 0.883 WRS CPX = 7 WRS DEV = 1.393
( Table. 2 )
PSP-Beta m = 3 Density = 0.5 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 23.22 54 100% 128 3609319.68 25164288 100%
4 291.6 432 100% 256 15081638.4 98115840 100%
8 6325.2 59472 100% 512 54346199.04 390878208 100%
16 22140 362880 100% 1024 238056192 1367055360 100%
32 123652.8 1946304 100% 2048 813088051.2 5621815296 100%
64 751161.6 6277824 100% 4096 4321929830.4 26417664000 100%
AVG CPX = 7 AVG DEV = 0.808 WRS CPX = WRS DEV = 1.121
( Table. 3 )
PSP-Beta m = 3 Density = 0.8 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 18 18 100% 128 10134478.08 48280320 100%
4 117.72 216 100% 256 51525504 384238080 100%
8 3144.96 196560 100% 512 210532654.08 920683008 100%
16 102621.6 935280 100% 1024 870861404.16 3582627840 100%
32 482558.4 5383584 100% 2048 2727900979.2 26562134016 100%
64 2273080.32 8237376 100% 4096 12333275136 51174789120 100%
AVG CPX = AVG DEV = 1.055 WRS CPX = WRS DEV = 1.691
( Table. 4 )
PSSP-I-11 m = 3 Density = 0.5 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 36 36 100% 128 147456 147456 100%
4 220.32 1440 100% 256 589824 589824 100%
8 1388.16 3456 100% 512 2359296 2359296 100%
16 3133.44 6912 100% 1024 9437184 9437184 100%
32 11059.2 27648 100% 2048 37748736 37748736 100%
64 36864 36864 100% 4096 150994944 150994944 100%
AVG CPX = AVG DEV = 0.412 WRS CPX = WRS DEV = 0.875
( Table. 5 )
PSSP-I-11 m = 3 Density = 0.8 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 36 36 100% 128 1940520.96 15925248 100%
4 144 144 100% 256 2058485.76 14745600 100%
8 2004.48 5184 100% 512 3279421.44 14155776 100%
16 102850.56 294912 100% 1024 9437184 9437184 100%
32 332144.64 1179648 100% 2048 37748736 37748736 100%
64 1503313.92 4718592 100% 4096 150994944 150994944 100%
AVG CPX = AVG DEV = 1.210 WRS CPX = WRS DEV = 1.454
( Table. 6 )
PSSP-II m = 3 Density = 0.5 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 85.59 135 100% 128 151661.97 281448 100%
4 1267.11 3456 100% 256 343351.71 633285 100%
8 8318.97 23382 100% 512 753932.61 1227906 100%
16 12842.55 23571 100% 1024 1608875.46 2623779 100%
32 27247.05 62154 100% 2048 3627271.26 5305608 100%
64 59828.76 151443 100% 4096 7491891.42 11388519 100%
AVG CPX = AVG DEV = 0.793 WRS CPX = WRS DEV = 0.946
( Table. 7 )
PSSP-II m = 3 Density = 0.8 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 40.5 54 100% 128 755826.93 2334366 100%
4 411.48 729 100% 256 1585949.49 3375405 100%
8 7677.18 29619 100% 512 3623290.92 8070651 100%
16 62231.76 211626 100% 1024 7667312.58 11737359 100%
32 152787.06 611685 100% 2048 17269367.94 24926724 100%
64 293816.16 667008 100% 4096 36573756.66 63574875 100%
AVG CPX = 7 AVG DEV = 1.107 WRS CPX = WRS DEV = 1.336
( Table. 8 )
PSSP-II m = 3 Worst Case 4 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 32.67 54 100% 64 43315.83 2103165 100%
4 575.37 1944 100% 128 50681.97 82053 100%
8 3301.29 15336 100% 256 110779.92 186003 100%
16 6849.9 25758 100% 512 229022.37 303480 100%
32 10408.5 82161 100% 1024 480603.51 607878 100%
AVG CPX = AVG DEV = 1.082 WRS CPX = WRS DEV = 2.059
( Table. 9 )
PSSP-II m = 3 Worst Case 2 Number of tests = 100
n AVG WRS Success n AVG WRS Success
2 42.66 81 100% 16 22631.67 180630 100%
4 285.12 432 100% 32 1916037.72 51227991 100%
8 3262.68 6237 100% 64 126764167.59 1950143904 100%
( Table. 10 )
Tables 1 to 8 show the complexity of the algorithms in practice for the average case ( AVG ) and the worst case ( WRS ) that occurred in practice, and Tables 9 and 10 show experimental results for worst cases 2 and 4 for the PSSP-II algorithm. The experimental results show that all algorithms can solve all these worst cases, but with a larger polynomial complexity. Density is the density of connections. The number of tests per size is 100. AVG CPX is the estimated complexity based on the average-case data, and AVG DEV is the deviation of its exponent; WRS CPX is the estimated complexity based on the worst-case data, and WRS DEV is the deviation of its exponent. These tables do not cover all subversions of each algorithm ( only one of them ). The researcher found that all algorithms work correctly with different permutations of the tier loops; thus each of them has many versions, and more tests would be better. Note that an instance needs memory for its connectivity graph that grows quadratically in n · m; for example, for one of the attempted sizes the essential memory is 2.4 gigabytes! Unfortunately this could not be accommodated in the 64-bit programming environment used ( in theory, a today's 64-bit system can address memory of size equal to 16,000 petabytes, i.e. 16,000,000 terabytes, and the target computer's hard disk is 1 terabyte ). ( Table. 11 ) shows a conclusion about all algorithms.
Based on the experimental results, many complexities are around the fastest time in which a process can verify the correctness of a solution. Note that an efficient algorithm must at least sometimes check whether the produced result is a correct solution; thus:
Definition 4.1. There is no efficient algorithm that can solve a problem with a complexity smaller than the complexity necessary to verify a solution of the problem; for 3-RSS instances this is the time needed to check the produced assignment against all conflicts.
We can analyze a 3-SAT instance based on the number of variables v or the number of clauses n. Each clause cancels one state for each of 3 variables. The number of all possible clauses of a 3-SAT over v variables can be counted, and if the density of clauses is a constant fraction of it, the expected number of clauses follows. If we use the same approach for the randomized 3-RSS instances used in this research, the number of all possible conflicts, the number of states reserved for the assumed result, and the expected number of conflicts for a given density can be counted in the same way. It holds that for constant density, n grows faster than v, and if the complexity of a problem expressed in n is polynomial, its complexity expressed in v is a polynomial of smaller degree; thus, for example, the worst-case complexity of PSSP-I-11 expressed in one measure can become linear when expressed in the other. The complexity of PSSP-II in practice for the worst case is small, but for Worst Case 4 it is bigger. The practical complexity of this algorithm is less than the complexity of visiting the whole problem data once, or of verifying a solution; note that we did not count the instructions of the verification checks. This can happen in practice, but it is not a theoretical complexity.
Algorithm    Type     Theoretical Complexity ( m-RSS )   Theoretical Complexity ( 3-RSS )   Average Practical Complexity   Worst Practical Complexity   Success   Performance
PSP-Alpha    Exact    ?                                  ?                                  ?                              ?                            100%      slow
PSP-Beta     Exact    ?                                  ?                                  ?                              ?                            100%      slow
PSSP-I-1     Random   ?                                  ?                                  ?                              ?                            100%      fast
PSSP-II      Random   ?                                  ?                                  ?                              ?                            100%      fast
( Table. 11 )
Let us assume that the practical complexities we estimated are exponential. Then there must be a constant α such that the complexity is of the form T( n ) = c · α^n. Based on ( 4.2 ) we then have:

a_i = log2( T( 2 · n_i ) / T( n_i ) ) = log2( c · α^( 2 · n_i ) / ( c · α^( n_i ) ) ) = n_i · log2( α )

Based on ( 4.3 ) we have:

ā = ( 1 / ( k − 1 ) ) · Σ_{i=1}^{k−1} n_i · log2( α )

Based on ( 4.4 ) we have:

Dev( ā ) = ( 1 / ( k − 1 ) ) · Σ_{i=1}^{k−1} | ā − n_i · log2( α ) |

Because the sizes n_i double at every step, the terms n_i · log2( α ) spread far from their mean, so for any base α appreciably larger than 1 the deviation Dev( ā ) must be large. The contrary is that if we assume the complexity is exponential, then the deviation must be at least a large number that we cannot reconcile with the experimental results. On the other hand, the worst-case deviation of PSSP-II is almost 2; if its complexity were exponential, α would have to be so close to 1 that it is too small to be the base of an exponential, and even then c · α^n with such a base would be a very good exponential complexity with a very small root.
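The argument above can be checked numerically: for a polynomial running time the per-step exponents a_i are constant, while for an exponential one they grow with n_i, so the deviation explodes. A small sketch under that assumption, reusing the doubling estimate:

```python
from math import log2

def doubling_deviation(counts):
    """Mean absolute deviation of the step exponents a_i = log2(T_{i+1}/T_i)
    for instruction counts measured at doubling sizes."""
    exps = [log2(counts[i + 1] / counts[i]) for i in range(len(counts) - 1)]
    mean = sum(exps) / len(exps)
    return sum(abs(mean - a) for a in exps) / len(exps)

sizes = [2 ** k for k in range(1, 8)]                 # n = 2, 4, ..., 128
print(doubling_deviation([n ** 3 for n in sizes]))    # polynomial T(n) = n^3: 0.0
print(doubling_deviation([1.1 ** n for n in sizes]))  # mild exponential T(n) = 1.1^n:
                                                      # already above every measured DEV
```

Even for a base as mild as 1.1 and sizes only up to 128, the deviation exceeds every DEV value reported in Tables 1 to 10, which is the quantitative content of the contrast drawn above.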
5. What about all results
In some practical uses we are probably interested in obtaining all results of the problem, or in driving the system to an arbitrary result. The algorithms, insofar as they produce a result, are a yes/no test for an NP-Complete problem; thus we can fix variables in different states and test whether the instance is still satisfiable, and with step-by-step testing we can find all results of the problem.
Let us assume that the complexity of doing one test is T, and assume there exists a result. The worst case is that for each rule the algorithm starts from the first state while the result lies in the last state; thus we must do a test for all states of all rules, step by step. Thus an upper bound for finding one result is n · m · T. If we have g different results, an upper bound for finding all of them is g · n · m · T. That is polynomial time.
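The step-by-step testing described above is the standard search-to-decision self-reduction; a sketch in Python, where `is_satisfiable` is a hypothetical decision oracle ( e.g. one of the algorithms of section 3 run on the instance with the listed rules forced to the listed states ):

```python
def find_result(n, m, is_satisfiable):
    """Fix rules one at a time, keeping state s for rule x only if the
    restricted instance stays satisfiable.  At most n * m oracle calls."""
    fixed = {}
    for x in range(n):
        for s in range(m):
            if is_satisfiable({**fixed, x: s}):
                fixed[x] = s
                break
        else:
            return None     # no state of rule x extends the partial result
    return fixed

# toy oracle: a partial assignment is extendable iff it agrees with {0:1, 1:0, 2:2}
target = {0: 1, 1: 0, 2: 2}
print(find_result(3, 3, lambda part: all(part[x] == target[x] for x in part)))
# prints {0: 1, 1: 0, 2: 2}
```

To enumerate all g results, one re-runs the same search while additionally forbidding each result already found, which gives the g · n · m · T bound stated above.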
6. What about problems based on profit
Consider a problem based on profit, like TSP or Knapsack, where n is the size of the problem and d is the number of digits that every weighted property has at most. If we test whether the problem has a result with a profit of at least k, this test is in NP and thus can be mapped to an RSS instance. Let the complexity of solving this decision problem be T. The maximum profit in such a system cannot exceed the summation of all the weighted objects, and by the well-known fact of Boolean algebra the number of digits of this maximum is polynomial in d and log n. Thus if we do a binary test on the profit, we can find the result with complexity at most that number of digits times T. That is polynomial time.
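The binary test described above can be sketched with a hypothetical decision oracle `has_result_with_profit` ( the NP test "is there a result with profit at least k" ); it needs a number of oracle calls logarithmic in the maximum possible profit:

```python
def max_profit(upper_bound, has_result_with_profit):
    """Binary-search the largest k in [0, upper_bound] for which the
    decision oracle answers yes; O(log upper_bound) oracle calls."""
    lo, hi, best = 0, upper_bound, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if has_result_with_profit(mid):
            best, lo = mid, mid + 1     # a result with profit mid exists: go higher
        else:
            hi = mid - 1                # no such result: go lower
    return best

print(max_profit(100, lambda k: k <= 37))   # 37
```

Since the upper bound on the profit has a number of digits polynomial in d and log n, the search multiplies the cost of one decision test by only a polynomial factor.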
7. Conclusion
This paper presents a new idea for processing hard problems: Packed Computation. This area is still largely unexplored, and people can create many new ideas within it. The importance of packed computation is that it can cope with NP-Completes. The researcher believes that humanity must find methods for solving all problems very fast. However, the worst-case complexity of these algorithms is an open problem.
8. Acknowledgements
I thank Dr. Mohammad Reza Sarshar and Dr. Shahin Seyed Mesgary of the University of Karaj, Sina Mayahi, Professor Albert R. Meyer of MIT, and the others who helped me conduct this research.
Are Evolutionary Algorithms Required to Solve Sudoku Problemscsandit
 
A Genetic Algorithm Problem Solver For Archaeology
A Genetic Algorithm Problem Solver For ArchaeologyA Genetic Algorithm Problem Solver For Archaeology
A Genetic Algorithm Problem Solver For ArchaeologyAmy Cernava
 
EuroAD 2021: ChainRules.jl
EuroAD 2021: ChainRules.jl EuroAD 2021: ChainRules.jl
EuroAD 2021: ChainRules.jl Lyndon White
 

Similar a OCT-20 (20)

GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATIONGENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION
 
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEMCOMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
 
Comparative study of different algorithms
Comparative study of different algorithmsComparative study of different algorithms
Comparative study of different algorithms
 
Chapter 3 introduction to algorithms handouts (with notes)
Chapter 3 introduction to algorithms handouts (with notes)Chapter 3 introduction to algorithms handouts (with notes)
Chapter 3 introduction to algorithms handouts (with notes)
 
An Optimum Time Quantum Using Linguistic Synthesis for Round Robin Cpu Schedu...
An Optimum Time Quantum Using Linguistic Synthesis for Round Robin Cpu Schedu...An Optimum Time Quantum Using Linguistic Synthesis for Round Robin Cpu Schedu...
An Optimum Time Quantum Using Linguistic Synthesis for Round Robin Cpu Schedu...
 
AN OPTIMUM TIME QUANTUM USING LINGUISTIC SYNTHESIS FOR ROUND ROBIN CPU SCHEDU...
AN OPTIMUM TIME QUANTUM USING LINGUISTIC SYNTHESIS FOR ROUND ROBIN CPU SCHEDU...AN OPTIMUM TIME QUANTUM USING LINGUISTIC SYNTHESIS FOR ROUND ROBIN CPU SCHEDU...
AN OPTIMUM TIME QUANTUM USING LINGUISTIC SYNTHESIS FOR ROUND ROBIN CPU SCHEDU...
 
International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)
 
Lexisearch Approach to Travelling Salesman Problem
Lexisearch Approach to Travelling Salesman ProblemLexisearch Approach to Travelling Salesman Problem
Lexisearch Approach to Travelling Salesman Problem
 
A Framework for Self-Tuning Optimization Algorithm
A Framework for Self-Tuning Optimization AlgorithmA Framework for Self-Tuning Optimization Algorithm
A Framework for Self-Tuning Optimization Algorithm
 
Optimised random mutations for
Optimised random mutations forOptimised random mutations for
Optimised random mutations for
 
Artificial Intelligence in Robot Path Planning
Artificial Intelligence in Robot Path PlanningArtificial Intelligence in Robot Path Planning
Artificial Intelligence in Robot Path Planning
 
T01732115119
T01732115119T01732115119
T01732115119
 
CSC2983
CSC2983CSC2983
CSC2983
 
G
GG
G
 
Are Evolutionary Algorithms Required to Solve Sudoku Problems
Are Evolutionary Algorithms Required to Solve Sudoku ProblemsAre Evolutionary Algorithms Required to Solve Sudoku Problems
Are Evolutionary Algorithms Required to Solve Sudoku Problems
 
final paper1
final paper1final paper1
final paper1
 
Algorithm
AlgorithmAlgorithm
Algorithm
 
A Genetic Algorithm Problem Solver For Archaeology
A Genetic Algorithm Problem Solver For ArchaeologyA Genetic Algorithm Problem Solver For Archaeology
A Genetic Algorithm Problem Solver For Archaeology
 
DAA-Module-5.pptx
DAA-Module-5.pptxDAA-Module-5.pptx
DAA-Module-5.pptx
 
EuroAD 2021: ChainRules.jl
EuroAD 2021: ChainRules.jl EuroAD 2021: ChainRules.jl
EuroAD 2021: ChainRules.jl
 

OCT-20

pg. 1

An introduction to Packed Computation as a new powerful approach to dealing with NP-Complete problems

Mr. Kavosh Havaledarnejad
Icarus.2012@yahoo.com

Abstract: This text is a guideline for solving NP, NP-Complete and some NP-Hard problems by reducing ( mapping ) them to the RSS problem and applying Packed Computation. It is the outline of a broad research effort conducted from 2008 to 2014. RSS ( a k,2-CSP ) is NP-Complete, as other NP-Complete problems can be reduced to it. The RSS problem can be solved very fast ( at least in the average case ) by several algorithms defined by the Packed Computation idea. The researcher spent many months trying to construct a mathematical proof showing whether these algorithms are polynomial or exponential, testing many ideas and visions, but this process has not yet succeeded. The whole of this text is devoted to RSS and Packed Processing; the aim, however, is to propose a new approach to designing algorithms.

Keywords: Time Complexity, Complexity Classes, Exact Algorithms, Combinatorics, Randomized Algorithms, Rules States Satisfiability Problem, Polynomial Reductions, Packed Computation, Packed State Process, Packed State Stochastic Process

1. Introduction

In this paper we introduce a new powerful approach to designing algorithms, especially for dealing with NP-Complete problems. We also introduce a new NP-Complete problem and four algorithms for solving it. Experimental results show that in the average case ( randomly generated instances ) these algorithms are very fast and have low complexity. It is not clear whether these algorithms are in general polynomial or exponential for this NP-Complete problem. One may prove that they are polynomial ( implying NP = P ) or prove that they are exponential ( that counterexamples exist ). Our focus, however, is on proposing this new powerful device for dealing with hard problems.
From a practical point of view, researching optimization problems and NP-Hard problems is not a sound approach to establishing the equality NP = P or to finding polynomial algorithms for NP problems, because the researcher never knows whether an algorithm will reach the best result or is only a greedy procedure that reaches a local optimum. Researching decision problems for which we do not know whether at least one result exists is not a sound approach either, because when the algorithm returns "NO" the researcher cannot tell whether the algorithm is incorrect or the problem simply has no result. The approach used throughout this research was to study NP problems that are known to have at least one result. The researcher first worked on the Map Coloring problem. It is a special case of the RSS problem, it is NP-Complete, and it is a special case of Graph Coloring, but, surprisingly, we know: "If the graph is planar then the problem has at least one result." There is a similar property for RSS instances: in section 2.4 we explain how to produce a random RSS instance that is guaranteed to have at least one result. In such a case, if the algorithm returns YES we know it is correct, and if it returns NO we know it is incorrect. We can still solve optimization problems; for example, we can transform Maximum Clique to RSS and then solve it. Four principles were observed during this research: 1- Test more and more instances. 2- Design new ideas for stochastically generating ever more difficult and non-symmetric random instances as worst cases. 3- Design algorithms able to cover all of them. 4- Analyze the designed algorithms mathematically, trying to establish contradiction examples. The final step is the one that could yield a mathematical proof of the equality NP = P, but this process has not yet succeeded. The method used to find algorithms that solve NP-Complete problems very fast was itself a local-search hill-climbing method, handled by the researcher.
Every algorithm has neighbors that differ slightly from the prime algorithm; some of them are acceptable and some are not, and some may be simpler than the prime algorithm. Thus we can find the algorithm with the best performance, or the simplest algorithm for a mathematical proof, which the researcher believes is PSP-Alpha. Chapter 2 completely reviews the Rules States Satisfiability ( RSS ) problem and its reductions. An RSS problem instance consists of several rules, each of which consists of several states. Every rule can stand in only a single one of its
states. In two different rules, two states either are connected or have a conflict ( are not connected ). The question is: is there a collection of states, one single state chosen in each rule, such that all of them are connected to each other? An RSS instance is in fact an instance of Maximum Clique in a multipartite graph whose nodes lie in several clusters ( independent sets ), where within each cluster the nodes are not connected to each other. We call every cluster a rule and its nodes states. Only one difference exists between RSS instances and Maximum Clique instances: when we solve an RSS instance we note which states lie in the same rule, but when we solve a Maximum Clique instance we do not note which nodes lie in the same cluster; in fact we forget the clusters even if they exist. In another view, a General Boolean Satisfiability instance consists of the conjunction of several Boolean functions, and the instance is satisfied when all Boolean functions are satisfied. When a Boolean function has n Boolean variables, it can stand in 2^n different states, some of which are acceptable and some not, depending on the function. Based on the variables shared between different Boolean functions, some of these states in different functions conflict and some do not. We call two states in two functions connected when they do not conflict, and we call the Boolean functions rules. This gives a reduction from General Boolean Satisfiability instances to RSS instances. Chapter 3 introduces a new idea in algorithm design, a new family of algorithms called Packed Computation. When an element of a problem consists of several states of which a correct result can select only a single one, an algorithm based on Packed Computation is one that sometimes selects only a single state ( called a single state ) and sometimes selects several states at once ( called a packed state ).
It is as if, while I am at home eating breakfast, I am also at university studying and in the car driving. In other words, classical search algorithms only search and revisit candidate results of the problem, whereas a packed-computation search algorithm revisits conditions between them ( combinations of them ). Chapter 3.1 presents some exact polynomial algorithms based on this approach. Chapter 3.2 presents some randomized algorithms. They are stochastic processes and Markov chains ( see [3] & [6] ); more precisely, they are multivariate Markov chains ( see [8] ). When an algorithm is randomized, its behavior depends on the sequence of random numbers injected into it, and there are some such sequences for which the algorithm does not work. Thus there always exists a very low probability that the algorithm does not reach a result. We can nevertheless prove that such algorithms have a polynomial expectation ( mathematical mean ): in [6], chapter 4.5.3, Sheldon M. Ross gives such a proof, showing that the expected number of steps for a local search algorithm on the 2-SAT problem to reach a result is polynomial. We call such algorithms Zero Error and say they lie in the ZPP complexity class. Chapter 4 contains complexity tables for all the algorithms. Experimental results show that all the algorithms have near-polynomial complexity, at least in the average case; in other words, they are able to solve all tested instances very fast. Note that an algorithm with exponential running time can be more efficient than a polynomial algorithm for moderate instance sizes ( see [7], Introduction ), although any exponential complexity eventually exceeds any polynomial one; the importance of polynomiality appears when we deal with very large instances. In any case, the algorithms proposed in this paper perform well in practice.
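As a minimal illustrative sketch ( in Python, not taken from the paper ), the packed-state idea can be represented by giving each rule a set of its currently selected states: a singleton set is a single state, and a larger set is a packed state.

```python
# Sketch of single vs. packed states; representation choices are mine,
# not the paper's. Each rule holds a set of its still-selected states.

def make_assignment(num_states_per_rule):
    # Initially every rule is fully packed: all of its states are selected.
    return [set(range(m)) for m in num_states_per_rule]

def is_single(assignment, rule):
    # A rule is in a "single state" when exactly one state remains selected.
    return len(assignment[rule]) == 1

assignment = make_assignment([3, 2, 2, 2])  # e.g. rules A, B, C, D below
assignment[1] = {0}                         # rule B collapsed to a single state
print(is_single(assignment, 0), is_single(assignment, 1))  # prints: False True
```

A classical search algorithm would only ever visit assignments in which every rule is single; a packed algorithm moves through mixed assignments like the one above.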
The exact algorithms introduced in this paper are both polynomial, so proving NP = P via these algorithms requires proving that they are correct. The two randomized algorithms, on the other hand, have no step limit and terminate whenever they reach a result, so proving NP = ZPP via these algorithms requires proving that their mean running time is polynomial. Analyzing these algorithms to show whether they are polynomial or exponential is a hard task and an open conjecture; here we only propose them.

1.1. Research Story

The primitive algorithms implemented and tested by the researcher were Energy Systems: systems that worked with real numbers between 0 and 1 as a form of uncertainty, seemingly akin to neurocomputation. They handled a great number of random inputs, but there were some instances they could not solve, so the researcher began combining these ideas with randomized local-search algorithms. The outcome was a system that worked with real numbers but with randomized computations. The researcher then eliminated the real-number uncertainty and designed systems using three levels: 0, 1 and 0.5. Throughout this research the researcher was working on General Boolean Satisfiability problems and Map Coloring. The researcher then reduced
Boolean SAT to the RSS problem, together with algorithms that used a multi-selection of states as a form of uncertainty; the algorithms were still randomized. Finally the researcher designed some exact algorithms.

2. RSS Problem

This chapter is devoted to the RSS problem and to other NP-Complete problems as a whole. We start by illustrating NP-Complete problems, then propose the RSS problem as a new NP-Complete problem via a reduction from General Boolean Satisfiability, and then pursue the scope with reductions from other NP-Complete problems to RSS instances. All NP-Complete problems can be reduced or transformed to each other in polynomial time, and all of them can be reduced to RSS instances in polynomial time. Thus, if there exists an algorithm for solving RSS instances in polynomial time, all we need do is transform any other NP-Complete problem to RSS in a polynomial-time process and then solve it in another polynomial-time process ( see [5] ); the prime problem is then solved in a polynomial number of computational steps. For example, if we transform problem α to problem β, the time to solve α is at most the time of the reduction plus the time to solve the transformed instance of β.

2.1. NP-Completes and NP-Hards

In computational complexity theory, NP-Hard problems are a set of computational problems introduced by Richard Karp in his 1972 paper "Reducibility Among Combinatorial Problems" [2]. Karp used Stephen Cook's 1971 theorem, published in the paper "The Complexity of Theorem-Proving Procedures" [1], that the Boolean Satisfiability problem is NP-Complete. A problem ρ is in the class NP if a solution for it can be verified in polynomial time. A problem ρ is NP-Hard if every problem in NP reduces to it. A problem is NP-Complete if it is in both classes ( see [9] ). The beauty of NP-Complete problems is that they can be mapped to each other in polynomial time; this process is called reduction.
In other words, an instance of problem A can be transformed to an instance of problem B if both are NP-Complete. Many reductions between NP-Complete and NP-Hard problems are known; Karp's paper introduced 21 NP-Hard problems and the reductions among them (Fig. 1).

Fig. 1 Reducibility among Karp's 21 NP-Hard problems

Over the years, many efforts have been made to cope with NP-Complete and NP-Hard problems. There are many heuristic, randomized, greedy and other types of algorithms for solving them. Some use exponential time. Some find a highly optimized result but do not guarantee to always find the best. Some work well for many
instances of a problem but do not work for some special cases. As an example, in the book "Probability and Computing: Randomized Algorithms and Probabilistic Analysis" [3], Michael Mitzenmacher and Eli Upfal present a randomized algorithm for solving Hamiltonian Cycle in chapter 5.6.2, together with a mathematical proof bounding the probability that the algorithm fails to reach a result, taken over a random problem instance and the random behavior of the algorithm together.

2.2. Introducing RSS by reduction from General Boolean Satisfiability

In this section we introduce the Rules States Satisfiability problem by a simple example. Suppose we have four Boolean variables, namely α, β, γ and δ, and we want to satisfy four different logical functions on these variables, which we call A, B, C and D. Function A is OnlyOne( α, β, γ ), which simply states that exactly one of its operands α, β or γ can be One and the others must be Zero; B is a function of β and δ; C is a function of α and δ; and D is a fourth function. All of these functions must be satisfied, so we can combine them into the single formula A ∧ B ∧ C ∧ D. This is in fact an instance of the Boolean Satisfiability problem, and the question is to find an assignment satisfying all of the functions at once. One can check that this instance has only one satisfying assignment. Now, OnlyOne can stand in only 3 different states ( 100, 010 and 001 over α, β, γ ), while each of the functions B, C and D can stand in only 2 different states. We now call the functions rules. When a state of one rule and a state of another rule assign different values to a shared variable, the two states are contrary to each other. Thus some states in some rules are contrary; when two states in two different rules are contrary we say they have a conflict, and if not we say they are connected.
Thus we have a new problem. We can show it in a diagram ( Fig. ? ):
Fig. ? Diagram of the instance

In this new problem the question is to choose only one single state in each rule such that the chosen states have no conflict with each other. This is an example of the Rules States Satisfiability problem. In fact RSS resembles a multivariate state machine.

Definition 2.2.1. An RSS instance is a problem consisting of rules, where every rule consists of states. Between two different rules, two states either are connected or have a conflict. The question is: is there a satisfying assignment of one single state per rule such that all chosen states are connected to each other? If so, what is this assignment?

In fact we have reduced a Boolean Satisfiability instance to an RSS instance, and it is obvious that solving this RSS instance immediately solves the prime problem. One can check that the result of the RSS instance is the unique collection of states that are all connected to each other; transforming this result back gives the satisfying assignment of the prime problem. Thus we can write a 4-step algorithm for reducing Boolean Satisfiability instances to RSS instances.

Algorithm 2.2.1. Consider a General Boolean Satisfiability problem with Boolean variables and Boolean functions defined on them.

Everything a clause in the 2-SAT problem does is cancel one case of the problem. In 2-SAT a bit can stand only in the zero state and the one state; if we expand the number of states per bit, the 2-SAT problem simply converts to the RSS problem, as each disconnection ( conflict ) obviously defines a twofold canceling clause. In addition, the RSS problem is a special case of CSP ( Constraint Satisfaction Problem ): when in an ( a, b )-CSP a denotes the number of different values each variable can take and b denotes the number of variables each constraint is defined on, RSS is a ( d, 2 )-CSP ( see [10] ).

2.3. Other Reductions

In this chapter some reductions from other NP-Complete problems to RSS are proposed.

2.3.1.
Graph Coloring is a special case of RSS

Graph Coloring is an NP-Hard problem: find the minimum number of colors with which an arbitrary graph with n vertices can be colored. We can break this problem into smaller decision problems asking, for every k ≤ n: can we color the graph with k colors? If these subproblems are solvable in polynomial time, the prime problem is polynomial solvable. Consider a decision Graph Coloring problem with n vertices: color the graph so that no two adjacent vertices have the same color, using at most k colors. This is immediately an RSS instance: call the vertices rules and the colors states. Two states in two rules conflict if they denote the same color in two adjacent vertices, establishing the reduction.

2.3.2. Reducing 3-SAT to 3-RSS

1- Consider M rules for the RSS instance, corresponding to the M Boolean functions.
2- In each rule, write all states of its corresponding function ( a function with x Boolean variables has 2^x different states ).
3- Cross out the states that are not true under the definition of their function ( for example in x ↔ y, the cases 01 and 10 are not true ).
4- Connect states in different rules if they have no conflict on the shared Boolean variables.
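The 4-step recipe above can be sketched in Python. The representation ( Boolean functions as predicates over named variables ) and the helper names are my own, not the paper's:

```python
from itertools import product

def reduce_sat_to_rss(functions):
    """functions: list of (variable_names, predicate) pairs.
    Returns the RSS rules: one list of states per function, where each
    state is a dict mapping variable name -> Boolean value."""
    rules = []
    for variables, predicate in functions:            # step 1: one rule per function
        states = []
        for values in product([False, True], repeat=len(variables)):  # step 2
            if predicate(*values):                    # step 3: keep satisfying states
                states.append(dict(zip(variables, values)))
        rules.append(states)
    return rules

def conflict(state_a, state_b):
    # Step 4: two states conflict iff they disagree on a shared variable.
    return any(state_a[v] != state_b[v] for v in state_a if v in state_b)

# Tiny example with two Boolean functions sharing the variable y.
rules = reduce_sat_to_rss([
    (("x", "y"), lambda x, y: x == y),   # x <-> y : states 00 and 11 survive
    (("y", "z"), lambda y, z: y or z),   # y OR z  : states 01, 10, 11 survive
])
print([len(r) for r in rules])  # prints: [2, 3]
```

Solving the resulting RSS instance ( one connected state per rule ) then recovers a satisfying assignment of the original formula.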
Fig. ? Reducing from 3-SAT to RSS ( lines show conflicts )

There exists a famous reduction from 3-SAT to Maximum Clique ( see [5] ), but to a special case of Maximum Clique in which all nodes of the graph stand in separate independent sets of size 3. Thus, if we consider these independent sets as rules with 3 states, we have an RSS instance. Two nodes are connected by an edge unless (1) they correspond to literals in the same clause, or (2) they correspond to a variable and its inverse; for example, a 3-SAT formula of four clauses is transformed into the graph of (Fig. ?).

2.3.3. Maximum Clique is not immediately an RSS

Consider a social network: some people are in each other's friends lists, and we can model this as a friendship graph. The question "what is the maximum clique of people who all know each other?" is an example of the Maximum Clique problem. Every RSS instance immediately yields a Maximum Clique instance by omitting and eliminating the rules. If the RSS instance has n rules, the result of Maximum Clique cannot be bigger than n, because it can select at most one node from each independent set ( rule ), and a maximum clique of size n is a result of the RSS instance. The reverse is not true: a Maximum Clique instance in general is not immediately an RSS instance. A counterexample is a pentagon: the maximum clique of a pentagon is 2, but it cannot be configured in just 2 rules ( it requires at least 3 rules, which then do not exhibit an RSS solution ). Note that rules must be separate independent sets. Moreover, even if such a configuration exists, how would we find it, and what would the maximum size of the Maximum Clique be?

2.3.4. Reducing Maximum Clique to RSS

We explained that a Maximum Clique instance is not immediately an RSS instance. In this section we review a method for transforming a Maximum Clique instance to an RSS instance. Let G be a Maximum Clique instance with n nodes. We consider n rules corresponding to the nodes; every rule has two states, an on state and an off state.
For each two nodes that are disconnected in the graph, we place a conflict between the on-on states of the corresponding rules. This structure satisfies the clique condition: two rules whose corresponding nodes are not connected cannot both be in the on state, so an assignment satisfies the RSS configuration if and only if the on rules denote a clique in the graph. But this structure is not enough for Maximum Clique: it covers all cliques of the problem, yet even a single node is a clique that satisfies the configuration without being a maximum clique. Thus we must attach a structure that is satisfied if and only if the on rules show a clique of size k; we can then find the maximum possible clique by testing clique sizes from 1 to n step by step. We must design a structure computing the summation of the rules that are in the on state. We build it from summation rules S_2, ..., S_n, where n is the number of nodes of the graph and R_1, ..., R_n are the node rules. S_2 consists of the states 0, 1a, 1b and 2, and every S_i consists of states representing the values 0 through i.
We expect S_n to show the number of on states. Then, in each stage of the test for a k-clique, we can fix the state of S_n to k and test whether the RSS instance is solvable. Now we must design the relation between these rules. First we configure the relation between S_2 and the node rules R_1 and R_2. R_1 and R_2 are Boolean, so together they stand in 4 cases ( I use the word case to separate it from the word state on rules ): the summation of two of them is 1 ( on-off and off-on ), of one of them 2 ( on-on ) and of one of them 0 ( off-off ). We connect the 0 state of S_2 to the off states of R_1 and R_2. We connect the 2 state of S_2 to the on states of R_1 and R_2. We connect one of the 1a and 1b states ( arbitrarily ) to the on state of R_1 and the off state of R_2, and the other vice versa. Now we have characterized the relation between S_2, R_1 and R_2, and S_2 is the summation of R_1 and R_2. For every i > 2 we must find the relation of S_i with S_{i-1} and a new rule R_i such that S_i is the summation of S_{i-1} and R_i. Naively S_i would need one state per pair of a state of S_{i-1} and a state of R_i, but duplicated value states such as 1a and 1b are interchangeable for S_i, so S_i needs only one edition of the value 0, one edition of the value i, and two editions of every value in between, while R_i consists of the states on and off. Since S_i is the summation of S_{i-1} with a new Boolean rule, its range is from 0 to i. We connect the 0 state of S_i to the off state of R_i and the 0 state of S_{i-1}. We connect the i state of S_i to the on state of R_i and the i-1 states of S_{i-1}. For every other value v of S_i there are two editions: we connect one of them ( arbitrarily ) to the off state of R_i and the v states of S_{i-1}, and the other to the on state of R_i and the v-1 states of S_{i-1}. Now S_i is the summation of the first i node rules. Following this process, S_n is the summation of all the R_i's, denoting the size of the selected clique, and by fixing the state k in S_n we can test whether the problem has a clique of size k. There are other ways of creating such an attachment structure for Maximum Clique; for example, we can compute the summations with a binary tree. But here let us compare the number of states generated for the RSS configuration with the number of nodes n of the prime Maximum Clique instance. Obviously the number of node rules R_i is n, and thus the number of states of the R_i's is 2n.
It is obvious that every S_i has 2i states. Thus the number of generated states is:

2n + sum_{i=2..n} 2i = 2n + ( n( n + 1 ) - 2 ) = n^2 + 3n - 2

When we deal with a large graph this is almost equal to n^2. If the time needed to solve an RSS instance with s states overall is ideally T( s ), then the time needed to compute one test for a k-clique is T( n^2 ), and since we must do n tests to find a Maximum Clique, the time needed to solve Maximum Clique is n · T( n^2 ). But we can use binary search instead of testing all clique sizes; in that case the complexity is log( n ) · T( n^2 ). It means that if RSS is solvable in polynomial time, then Maximum Clique, Independent Set and Vertex Cover, which are NP-Hard, are in P as well!

2.3.5. Other problems

It is a known principle that all problems in NP can be reduced to the SAT and 3-SAT problems, and we have explained how SAT and 3-SAT reduce to RSS. Independent Set and Vertex Cover, which are not NP-Complete but NP-Hard, can immediately be transformed to a Maximum Clique instance ( see [5] ). 3-SAT can be reduced to 3-Colorability, which is immediately an RSS instance ( see [5] ). For a reduction from Hamiltonian Circuit to a SAT consisting of OnlyOne and NAND functions, see [11] ( I do not accept the main goal of that paper ). The N-Queens problem is immediately an RSS instance if every column is a rule.

2.3.6. Reducing RSS to 3-RSS
Fig. ? Expanding a 4-state rule into two 3-state rules

A k-RSS instance is an RSS instance with at most k states per rule. We can reduce a k-RSS instance to a 3-RSS instance, in which every rule has at most 3 states. Consider a rule with m states. We can divide these states into two sets a and b where:

|a| + |b| = m, |a| ≥ 2, |b| ≥ 2 ( 2.3.6.1 )

Then we replace the old rule with 2 new rules, the first consisting of ( a ) and a new extra state and the second consisting of ( b ) and a new extra state. The extra states conflict with each other, and the states of ( a ) and the states of ( b ) conflict with each other too ( Fig. ? ). Thus the system must select only one of the states of ( a ) or of the states of ( b ), meaning the mechanism works. We continue this division until all rules have at most 3 states. Based on the following theorem this reduction is polynomial relative to the parameters of the first problem.

Theorem 2.3.6.1. For every arbitrary division process ( selecting the states to be in the a or b sets arbitrarily ), for a rule consisting of m states, the number of generated objects ( new rules ) is m - 2.

Proof: The proof is by strong induction. Let the predicate P(m) be true if and only if for every arbitrary division process, for a rule consisting of m states, the number of generated objects ( new rules ) is m - 2. Base case: P(3) is true because the rule is itself an allowable rule, it is a single object, and 3 - 2 = 1. Inductive step: assume the predicate is true for all 3 ≤ k < m. We may divide the m states into sets a and b observing formula 2.3.6.1. Then the first rule has |a| + 1 states and the second has |b| + 1 states, and based on 2.3.6.1, |a| + 1 < m and |b| + 1 < m. Thus P( |a| + 1 ) and P( |b| + 1 ) are true, so the two rules generate respectively ( |a| + 1 ) - 2 and ( |b| + 1 ) - 2 new objects, and the summation of them is:

( |a| + 1 - 2 ) + ( |b| + 1 - 2 ) = |a| + |b| - 2 = m - 2

proving the theorem. When we are dealing with a large instance with n rules and m states per rule, after the reduction we have n( m - 2 ) rules and 3n( m - 2 ) states.
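Theorem 2.3.6.1 can be checked with a small simulation of the division process; this sketch and its helper names are mine, and the split points are chosen at random to exercise the "arbitrary division" claim:

```python
import random

def split_count(m):
    """Repeatedly split rules with more than 3 states into two rules of
    |a|+1 and |b|+1 states; return how many final rules are produced."""
    sizes = [m]        # sizes of rules still to be processed
    finished = 0
    while sizes:
        s = sizes.pop()
        if s <= 3:
            finished += 1          # an allowable rule, no further division
            continue
        a = random.randint(2, s - 2)   # |a| >= 2 and |b| = s - a >= 2
        b = s - a
        sizes.extend([a + 1, b + 1])   # each part gains one extra state
    return finished

# The count should be m - 2 regardless of how the splits were chosen.
for m in range(3, 12):
    assert split_count(m) == m - 2
print("ok")  # prints: ok
```

Running the loop many times over random split choices gives the same counts, matching the induction in the proof.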
Let s = nm denote the size of the prime problem and s' the size of the second problem; we have ( this is also an upper bound ):

s' = 3n( m - 2 ) < 3nm = 3s

2.4. Producing Random RSS Instances

In the introduction we mentioned that researching a decision problem when we do not know whether it contains a valid result is not a sound approach to settling the question, because when the algorithm fails we do not know whether the algorithm is incorrect or the problem has no result. This section explains an algorithm able to construct a random RSS instance containing at least one result. For an RSS instance consisting of n rules, each rule consisting of exactly m states, we first assume that there exists a result G. We can show this result by a sequence G( 1 ), G( 2 ), ..., G( n ), where G( x ) is the state of rule x belonging to the result, and we can select every G( x ) uniformly at random. It is obvious that for every couple x, y, state G( x ) of rule x and state G( y ) of rule y must be connected to each other, meaning they have no conflict. All other couples of states in the problem are connected to each other with probability P, which is called the density of connection. We can do this with the following algorithm:

Algorithm 2.4.1. Consider an RSS problem with n rules and m states per rule.
1- For every x ≤ n select a value for G( x ) randomly between 1 and m.
2- Connect all of the states G( x ) to each other.
3- Connect every other pair of states with probability P, or disconnect it with probability 1 - P.
4- Forget what G was.
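Algorithm 2.4.1 can be sketched as follows; the dictionary representation of connections and the helper names are my own choices:

```python
import random

def random_rss(n, m, p, rng=None):
    """Planted-result generator: build a random RSS instance with n rules
    and m states per rule that is guaranteed to have at least one result.
    connected[(x, i), (y, j)] tells whether state i of rule x and state j
    of rule y are connected."""
    rng = rng or random.Random(0)
    g = [rng.randrange(m) for _ in range(n)]      # step 1: hidden result G
    connected = {}
    for x in range(n):
        for y in range(x + 1, n):
            for i in range(m):
                for j in range(m):
                    if i == g[x] and j == g[y]:
                        connected[(x, i), (y, j)] = True          # step 2
                    else:
                        connected[(x, i), (y, j)] = rng.random() < p  # step 3
    return connected                              # step 4: G is dropped

inst = random_rss(n=4, m=3, p=0.5)
print(len(inst))  # 6 rule pairs x 9 state pairs = 54 relations; prints: 54
```

Because G is forgotten in step 4, a solver given only `connected` must rediscover a result, while the generator still guarantees one exists.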
It is clear that this algorithm guarantees that the problem space contains at least one result, but it does not guarantee that there are no other (accidental) results. We can of course plant more than one result in the problem space before running the algorithm (the author found in practice that such instances produce good worst cases).

2.5. Worst Cases

In this part of the paper we introduce some worst cases that are a good shibboleth for testing correctness: many simple algorithms, and many complex algorithms that seemed to be correct, turned out to be false on these worst cases. All algorithms proposed in this paper solve these worst cases very fast.

2.5.1. Worst Case 1

This worst case is a pattern that can stand in the problem structure, and a problem can consist of several of these patterns; the dimensions of the pattern are measured as number of states × number of rules. The pattern is designed over consecutive rules, which we first divide into consecutive pairs. Recall from the last section that we assumed there exists a result G, so every rule x has a state G(x) belonging to the assumed global result. For each pair of rules x and y we do the following: besides the two states G(x) and G(y) belonging to the global result, we select further states for each of the two rules uniformly at random, and we place conflicts between the selected states of x and the selected states of y. We also connect every selected state to all the states belonging to the global result that the pattern covers (all other relations are connected or disconnected arbitrarily, following the definitions of the previous section). The pattern can cover the whole system, repeat inside the system, or be only a part of it. This worst case fails many simple algorithms that worked well on the average case.

2.5.2. Worst Case 2

This worst case is simply a problem with several assumed results.
When we plant several results at random in the problem structure and connect the other states randomly, the produced problem has at least that many results. Experimental results show that an instance with several assumed results and a suitable connection probability is a very powerful worst case.

2.5.3. Worst Case 3

This worst case is similar to Worst Case 2, but instead of assuming perfect results we assume several aborted results. In each aborted result we pick two rules and place a conflict between the state of the first rule and the state of the second rule that stand in the aborted result, while connecting all of its other states. These aborted results deceive the algorithms.

2.5.4. Worst Case 4

This worst case is the hardest known worst case for the proposed algorithms. Although solving it is simple for some algorithms, it is hard for classical heuristics, packed heuristics and packed exact algorithms. To create it, we first select two rules uniformly at random. For each relation between states s and t where s is part of the global result G and t is not: if s and t both lie in the two selected rules they are connected, and otherwise they have a conflict. For each relation between states s and t where neither is part of the global result: if both lie in the two selected rules they have a conflict, and otherwise they are connected.

Theorem 2.5.4.1. The only result in such a structure is G.
Proof. For the whole system except the two selected rules we have only 2 choices: we can select all the states of the rules from G, or select all of them from outside G, because every state from G and every state from outside G (outside the selected rules) have a conflict. Analysis by cases; there are two cases:
1. If we choose all the states of the rules except the two selected ones from G, then for the two selected rules we have only one selection: we must choose their states from G as well, because their states outside G have conflicts with the G-states in the other rules. Thus G is a result.
2. If we choose all the states of the rules except the two selected ones from outside G, then they have conflicts with the G-states of the two selected rules; but the states outside G in the two selected rules also have conflicts with each other, thus there is no result in this case.

3. Packed Computation Idea

The term packed computation means conducting computational steps on packed data. In a classical algorithm every variable (here, a rule) can select only one of its states, but in a packed computation algorithm every variable can select more than one state as its current state. In a classical algorithm, when we assemble all variables we have one candidate result, or global state; in a packed processing algorithm, when we assemble all variables we have a set of candidate results, or global states. We call this situation a Packed Global State. We say a packed global state is a Packed Result if every global state belonging to its set of global states is a correct result. In a RSS problem, if no active states of a packed global state have a conflict, it is a packed result, as it is a set of many correct results.

Definition 3.1. A variable (here, a rule) with valid states α, β, γ, δ, ... is in a packed state if it is in a combination of them like αβ, αγ, αβγ, .... When a variable is in only one state it is deterministic; when it is in a packed state it is nondeterministic.

Definition 3.2.
A packed global state, or packed candidate result, is a combination of variables each of which is in a single state or in a packed state; the set of candidates the system is visiting is obtained by the cross multiplication (Cartesian product) of the state sets of all variables.

Definition 3.3. A packed global state is a packed correct result if all of its sub-candidates are correct results.

3.1. PSP family (Exact Algorithms)

In this section we introduce 3 exact algorithms based on packed computation.

Definition 3.1.1. A process belongs to the Packed State Process class if it is based on packed computation, so that variables can visit more than one of their states along the computation, and it is an exact algorithm.

3.1.1. Basic Actions

Before introducing the exact algorithms we must first introduce their basic actions. These basic actions are common to all of them; I suspect there are many other algorithms that work by these basic actions, and perhaps many of them are correct as well.

Action 3.1.1.1. GeneralReset: after this action all rules stand in the packed state in which they select all of their valid states. For example, if in a 3-RSS every rule contains the states α, β, γ, then after GeneralReset all of them stand in the state αβγ. In this situation the system contains all possible candidates.

Action 3.1.1.2. LocalReset( x ): after this action rule x stands in the packed state in which it selects all of its valid states, independently of what the rule was before this action.

Definition 3.1.1.1. The priority of a state in a rule depends on the condition the rule is in. When a rule is in a packed state, meaning it selects more than one state, the priority of its states is 2. When a rule is in a single
state, the priority of its only state is 1. When a state is off in a rule, meaning the rule does not select it, the priority of this state is 0.

Action 3.1.1.3. LocalChangeDo( I, J, s ): the system checks whether there exist one or more states t in rule J that are On and have a conflict with state s of rule I; if so, it turns off state s of rule I if and only if the priority of t is lower than or equal to the priority of s. This means a state with priority 2 cannot cancel a state with priority 1.

Action 3.1.1.4. LocalChangeReset( K ): the system checks whether rule K contains no On state, meaning the rule is empty; if so, it resets the rule, so that it stands in all of its valid states as after LocalReset( K ).

3.1.2. PSP-Alpha

PSP-Alpha is a sequence of the basic actions proposed in the prior section that leads to a packed result, and it is an exact algorithm. Its worst-case complexity, for a RSS instance with n rules each containing m states, for a 3-RSS instance with n rules, and for a 3-SAT instance reduced to 3-RSS with n clauses, is a large bound; however, the algorithm usually converges faster than this. Let us divide the body of the algorithm into 2 parts, Tier and Scan. Scan reviews the problem completely and uses the basic actions; this review has some parameters. Tier calls different Scans and gives them parameters that change from scan to scan (Fig. ?).

Fig. ? Different tiers of the algorithm

Algorithm 3.1.2.1. Scan. Consider a RSS instance with rules labeled from 0 to n − 1, each containing states labeled from 0 to m − 1. Every LocalChangeDo affects one point of the problem, state s of rule I, which we call the Destination Address of the action, and its effect is based on another rule J, which we call the Source Address of the action. The parameters a, b, c, s0, s1 are the input parameters of a scan, and they change from scan to scan.
The parameters a and b are shifts on the visiting order of the destination rules and the source rules respectively, and s0 is a shift on the visiting order of the destination states. The parameter c is a stride whose range is between 1 and n: when c is 1 the algorithm counts J like 0, 1, 2, ...; when c is 2 it counts J like 0, 2, 4, ..., 1, 3, 5, ...; when c is 3 it counts J like 0, 3, 6, ..., 1, 4, 7, ..., 2, 5, 8, ...; and so on. The parameter s1 also shifts the state visiting order, but based on the number of the destination rule I.
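The basic actions above can be made concrete in a short Python sketch. The assumptions here are mine: a rule is represented as a set of active (On) state indices, and `conflict(I, s, J, t)` is a caller-supplied predicate telling whether state s of rule I conflicts with state t of rule J; the function names mirror the actions.

```python
def priority(rule, s):
    """2 for a state active in a packed rule, 1 if it is the single
    active state, 0 if the state is off (Definition 3.1.1.1)."""
    if s not in rule:
        return 0
    return 2 if len(rule) > 1 else 1

def local_change_do(rules, conflict, I, J, s):
    """Action 3.1.1.3: turn state s of rule I off if some On state t of
    rule J conflicts with it and priority(t) <= priority(s)."""
    if s not in rules[I]:
        return
    ps = priority(rules[I], s)
    for t in list(rules[J]):
        if conflict(I, s, J, t) and priority(rules[J], t) <= ps:
            rules[I].discard(s)
            break

def local_change_reset(rules, K, m):
    """Action 3.1.1.4: if rule K has no On state left, reset it to all
    m valid states."""
    if not rules[K]:
        rules[K] = set(range(m))
```

For example, with `rules = [{0, 1}, {0}]` and a conflict between state 0 of rule 0 and state 0 of rule 1, `local_change_do(rules, conflict, 0, 1, 0)` removes state 0 from rule 0, because the single state of rule 1 has priority 1 while the packed state has priority 2.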
Algorithm 3.1.2.2. PSP-Alpha ( Tier ): the outer repeat loop is only a simple repetition; the other loops produce different parameters for the scans.

3.1.3. PSP-Beta

This algorithm is similar to PSP-Alpha; only the parameters that change between scans are different, and its worst-case complexity bound, for a RSS instance and for a 3-RSS or reduced 3-SAT instance, is again large.

Algorithm 3.1.3.1. Scan. In this process a, b, c, s0, s1, s2, s3 are the input parameters of a scan, and they change from scan to scan. The parameters s2 and s3 are new; they produce more permutations of the state visiting order, based on the number of the source rule J, and invert the direction of motion (when we have 3 states per rule).

Scan( a, b, c, s0, s1 ) for PSP-Alpha:
1.  Count i from 0 to n − 1 do {
2.    I = ( i + a ) mod n
3.    LocalReset( I )
4.    jj = −c and js = 0
5.    Count j from 0 to n − 1 do {
6.      jj = jj + c
7.      If jj > n − 1 then js = js + 1 and jj = js
8.      J = ( jj + b ) mod n
9.      If I ≠ J do {
10.       Count s from 0 to m − 1 do {
11.         S = ( s + s0 + s1 · I ) mod m
12.         LocalChangeDo( I, J, S )
13.       }
14.     }
15.   }
16.   LocalChangeReset( I )
17. }

PSP-Alpha ( Tier ):
1.  GeneralReset
2.  Repeat these commands n times {
3.    Count a from 0 to n − 1 do {
4.      Count b from 0 to n − 1 do {
5.        Count c from 1 to n do {
6.          Count s0 from 0 to m − 1 do {
7.            Count s1 from 0 to m − 1 do {
8.              Scan( a, b, c, s0, s1 )
9.              Check if it is a result
10.           }
11.         }
12.       }
13.     }
14.   }
15. }
Algorithm 3.1.3.2. PSP-Beta ( Tier ):

Scan( a, b, c, s0, s1, s2, s3 ) for PSP-Beta (as the Alpha Scan, but with a different state shift):
1.  Count i from 0 to n − 1 do {
2.    I = ( i + a ) mod n
3.    LocalReset( I )
4.    jj = −c and js = 0
5.    Count j from 0 to n − 1 do {
6.      jj = jj + c
7.      If jj > n − 1 then js = js + 1 and jj = js
8.      J = ( jj + b ) mod n
9.      If I ≠ J do {
10.       Count s from 0 to m − 1 do {
11.         S = ( s + s0 + s1 · I + s2 + s3 · J ) mod m
12.         LocalChangeDo( I, J, S )
13.       }
14.     }
15.   }
16.   LocalChangeReset( I )
17. }

PSP-Beta ( Tier ):
1.  GeneralReset
2.  Count a from 0 to n − 1 do {
3.    Count b from 0 to n − 1 do {
4.      Count c from 1 to n do {
5.        Count s0 from 0 to m − 1 do {
6.          Count s1 from 0 to m − 1 do {
7.            Count s2 from 0 to m − 1 do {
8.              Count s3 from 0 to m − 1 do {
9.                Scan( a, b, c, s0, s1, s2, s3 )
10.               Check if it is a result
11.             }
12.           }
13.         }
14.       }
15.     }
16.   }
17. }

3.2. PSSP family (Randomized Algorithms)

In the prior sections some exact algorithms were proposed. They work very fast in practice, especially on average cases, but their worst-case complexity bounds are great. In this section we review two randomized algorithms that work more slowly but whose complexity bounds are small: PSSP-I and PSSP-II. The basic actions of the prior algorithms were conflict-based, as they eliminate every conflict they visit in a regular rhythm, but PSSP-I is a connection-based algorithm: it expands the packed candidate based on new states that are connected to the current states. Since we expect the whole configuration to be connected (if not, we divide the problem into 2 new subproblems and solve each), such a process drives the system back to the initial state (the case where all states in all rules are On) very fast. Thus, as a restrictor, the system sometimes
selects a single one of the On states uniformly at random, to dominate this expansion, as an antithesis. PSSP-II is similar to the prior algorithms, but randomized.

3.2.1. Basic Actions

Let us define some new basic actions. The new algorithms use these new basic actions together with the prior ones.

Action 3.2.1.1. ConnLocalSearch( X ): the system checks all On states of X; if state s of X is On but is not connected to at least one On state in every other rule, then the system turns off s. In other words, for state s of rule X, if there exists a rule Y such that no On state of Y is connected to s, the system turns off s in X.

Action 3.2.1.2. RandomSelect( X ): after this action rule X selects exactly one of the states that were active before executing the action, uniformly at random. For example, if X is αβ, after this action it becomes α or β uniformly at random.

3.2.2. PSSP-I, an asynchronous PSSP (a heuristic local search)

PSSP-I is a randomized algorithm based on packed processing, with a small complexity bound per trial for a 3-RSS or 3-SAT instance. It has 4 different editions.

Algorithm 3.2.2.1. PSSP-I. The following box proposes all 4 editions of the PSSP-I algorithm: choosing line 5 (command 1) or line 7 (command 2), and choosing line 11 (command 1) or line 13 (command 2), is arbitrary, so the algorithm has 4 different editions, respectively PSSP-I-11, PSSP-I-12, PSSP-I-21 and PSSP-I-22.

3.2.3. PSSP-II, a censored asynchronous PSSP

This algorithm is similar to the exact algorithms proposed in section 3.1, with a small theoretical complexity bound for a 3-RSS or 3-SAT instance.

Algorithm 3.2.4.1.
PSSP-II.

PSSP-I (all 4 editions):
1.  GeneralReset
2.  Count c from 1 to 7 do {
3.    Count x from 1 to n do {
4.      Do one of these commands (arbitrary):
5.      R = x                              ( command 1 )
6.      or
7.      R = random mod n                   ( command 2 )
8.      ConnLocalSearch( R )
9.      LocalChangeReset( R )
10.     Do one of these commands (arbitrary):
11.     E = True iff random mod 2 = 0      ( command 1 )
12.     or
13.     E = True iff c mod 2 = 0           ( command 2 )
14.     If E = True then
15.     RandomSelect( R )
16.   }
17.   Check if it is a result
18. }
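The PSSP-I box above can be sketched in Python roughly as follows. This is a hedged sketch, not the author's code: edition -21 is shown (random rule choice, coin-flip RandomSelect), the relation `conn[x][s][y][t]` is assumed to tell whether state s of rule x is connected to state t of rule y, the result check here naively tries only the smallest active state of each rule, and the sweep bound 200 is arbitrary.

```python
import random

def pssp1(conn, n, m, max_sweeps=200, seed=0):
    """Randomized packed local search in the style of PSSP-I (edition -21).
    Returns a conflict-free assignment, or None if none was found."""
    rng = random.Random(seed)
    rules = [set(range(m)) for _ in range(n)]           # GeneralReset

    def conn_local_search(x):
        # turn off every On state of x that is not connected to at least
        # one On state in every other rule (Action 3.2.1.1)
        for s in list(rules[x]):
            if any(not any(conn[x][s][y][t] for t in rules[y])
                   for y in range(n) if y != x):
                rules[x].discard(s)

    for sweep in range(max_sweeps):
        for _ in range(n):
            r = rng.randrange(n)                        # command 2
            conn_local_search(r)
            if not rules[r]:                            # LocalChangeReset
                rules[r] = set(range(m))
            if rng.randrange(2) == 0:                   # command 1
                rules[r] = {rng.choice(sorted(rules[r]))}   # RandomSelect
        # naive result check: try the smallest active state of each rule
        pick = [min(rules[x]) for x in range(n)]
        if all(conn[x][pick[x]][y][pick[y]]
               for x in range(n) for y in range(n) if x != y):
            return pick
    return None
```

On small random instances with planted results this sketch usually converges quickly; it is meant only to make the control flow of the box concrete.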
4. Experimental Results and Complexity Diagrams

In this section we review the complexity of the algorithms in practice, measured in executed instructions. We count only the instructions of the basic actions in the innermost loops: LocalChangeDo for PSP-Alpha, PSP-Beta and PSSP-II, and ConnLocalSearch for PSSP-I. Every LocalChangeDo costs on the order of m instructions, because it must check whether all the states of one rule have a conflict with a single state of another rule, and every ConnLocalSearch costs on the order of n·m² instructions, because it must check all states of all rules against all states of one rule. The experiments are run at sizes that are powers of 2. The beauty of this approach is that it covers both small and large instances: positive tests on small instances show that the correctness of the algorithms is not only an asymptotic behavior, and positive tests on large instances show how powerful the algorithms are.

Each experiment gives a sequence of instruction counts (average case or maximum case) T_1, T_2, ..., T_k. For two consecutive counts T_i and T_{i+1}, the size of the second instance is twice the size of the first. Assume the complexity is a polynomial of the form T(n) = c · n^x. Then

T_{i+1} / T_i = (2n)^x / n^x = 2^x, so x_i = log2( T_{i+1} / T_i ).   (4.1, 4.2)

Thus we obtain a sequence of exponents x_1, ..., x_{k−1}. The estimated exponent is their average

x̄ = ( 1 / (k − 1) ) Σ x_i,   (4.3)

which gives the estimated complexity O(n^x̄). We can also compute a deviation for it, showing how much the practical exponents deviate from this estimate:

Dev(x̄) = ( 1 / (k − 1) ) Σ | x̄ − x_i |.   (4.4)

PSSP-II:
1.  GeneralReset
2.  Count c from 1 to n·m do {
3.    x = random mod n
4.    y = random mod n
5.    If x ≠ y then do m times
6.    {
7.      s = random mod m
8.      LocalChangeDo( x, y, s )
9.      LocalChangeReset( x )
10.   }
11.   Check if it is a result
12. }

Table 1. PSP-Alpha, m = 3, Density = 0.5, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    28.26       72       |  128   1523024.64     7461504
4    292.68      1512     |  256   6163084.8      22325760
8    4964.4      21168    |  512   22345989.12    98896896
16   16200       108000   |  1024  86737305.6     358262784
32   50353.92    366048   |  2048  347496099.84   1924245504
64   339655.68   1342656  |  4096  1458255052.8   6944071680
AVG CPX = …, AVG DEV = 0.797; WRS CPX = …, WRS DEV = 0.847 (Table 1)

Table 2. PSP-Alpha, m = 3, Density = 0.8, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    18          18       |  128   3735141.12     15215616
4    108         108      |  256   12990067.2     35251200
8    1789.2      11592    |  512   52533089.28    209567232
16   39333.6     354240   |  1024  222217205.76   971080704
32   170256.96   830304   |  2048  957217812.48   3622109184
64   902845.44   3302208  |  4096  3266732851.2   12982394880
AVG CPX = …, AVG DEV = 0.883; WRS CPX = …, WRS DEV = 1.393 (Table 2)

Table 3. PSP-Beta, m = 3, Density = 0.5, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    23.22       54       |  128   3609319.68     25164288
4    291.6       432      |  256   15081638.4     98115840
8    6325.2      59472    |  512   54346199.04    390878208
16   22140       362880   |  1024  238056192      1367055360
32   123652.8    1946304  |  2048  813088051.2    5621815296
64   751161.6    6277824  |  4096  4321929830.4   26417664000
AVG CPX = …, AVG DEV = 0.808; WRS CPX = …, WRS DEV = 1.121 (Table 3)

Table 4. PSP-Beta, m = 3, Density = 0.8, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    18          18       |  128   10134478.08    48280320
4    117.72      216      |  256   51525504       384238080
8    3144.96     196560   |  512   210532654.08   920683008
16   102621.6    935280   |  1024  870861404.16   3582627840
32   482558.4    5383584  |  2048  2727900979.2   26562134016
64   2273080.32  8237376  |  4096  12333275136    51174789120
AVG CPX = …, AVG DEV = 1.055; WRS CPX = …, WRS DEV = 1.691 (Table 4)

Table 5. PSSP-I-11, m = 3, Density = 0.5, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    36          36       |  128   147456         147456
4    220.32      1440     |  256   589824         589824
8    1388.16     3456     |  512   2359296        2359296
16   3133.44     6912     |  1024  9437184        9437184
32   11059.2     27648    |  2048  37748736       37748736
64   36864       36864    |  4096  150994944      150994944
AVG CPX = …, AVG DEV = 0.412; WRS CPX = …, WRS DEV = 0.875 (Table 5)

Table 6. PSSP-I-11, m = 3, Density = 0.8, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    36          36       |  128   1940520.96     15925248
4    144         144      |  256   2058485.76     14745600
8    2004.48     5184     |  512   3279421.44     14155776
16   102850.56   294912   |  1024  9437184        9437184
32   332144.64   1179648  |  2048  37748736       37748736
64   1503313.92  4718592  |  4096  150994944      150994944
AVG CPX = …, AVG DEV = 1.210; WRS CPX = …, WRS DEV = 1.454 (Table 6)

Table 7. PSSP-II, m = 3, Density = 0.5, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    85.59       135      |  128   151661.97      281448
4    1267.11     3456     |  256   343351.71      633285
8    8318.97     23382    |  512   753932.61      1227906
16   12842.55    23571    |  1024  1608875.46     2623779
32   27247.05    62154    |  2048  3627271.26     5305608
64   59828.76    151443   |  4096  7491891.42     11388519
AVG CPX = …, AVG DEV = 0.793; WRS CPX = …, WRS DEV = 0.946 (Table 7)

Table 8. PSSP-II, m = 3, Density = 0.8, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    40.5        54       |  128   755826.93      2334366
4    411.48      729      |  256   1585949.49     3375405
8    7677.18     29619    |  512   3623290.92     8070651
16   62231.76    211626   |  1024  7667312.58     11737359
32   152787.06   611685   |  2048  17269367.94    24926724
64   293816.16   667008   |  4096  36573756.66    63574875
AVG CPX = …, AVG DEV = 1.107; WRS CPX = …, WRS DEV = 1.336 (Table 8)

Table 9. PSSP-II, m = 3, Worst Case 4, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    32.67       54       |  64    43315.83       2103165
4    575.37      1944     |  128   50681.97       82053
8    3301.29     15336    |  256   110779.92      186003
16   6849.9      25758    |  512   229022.37      303480
32   10408.5     82161    |  1024  480603.51      607878
AVG CPX = …, AVG DEV = 1.082; WRS CPX = …, WRS DEV = 2.059 (Table 9)

Table 10. PSSP-II, m = 3, Worst Case 2, 100 tests per size (success 100% in all cells)
n    AVG         WRS      |  n     AVG            WRS
2    42.66       81       |  16    22631.67       180630
4    285.12      432      |  32    1916037.72     51227991
8    3262.68     6237     |  64    126764167.59   1950143904
(Table 10)

Tables 1 to 8 show the complexity of the algorithms in practice, for the average case (AVG) and the worst case observed in practice (WRS); Tables 9 and 10 show experimental results on Worst Cases 4 and 2 for the PSSP-II algorithm. The experimental results show that all algorithms can solve all the worst cases, but with a larger polynomial complexity. Density is the density of connections, and the number of tests per size is 100. AVG CPX is the estimated complexity based on the average-case data and AVG DEV the deviation of its exponent; WRS CPX and WRS DEV are the same for the worst-case data. These tables do not cover all subversions of each algorithm (only one of them); the researcher found that all algorithms work correctly with different permutations of the tier loops, so each has many versions, and more tests would be better. Note that an instance needs memory quadratic in n·m for its connectivity graph; for the largest sizes considered, the essential memory is 2.4 gigabytes!
Unfortunately, instances of that size could not be allocated in the 64-bit programming language used (in theory a today's 64-bit system can address about 16,000 petabytes, i.e. 16,000,000 terabytes, but the target computer's hard disk is only 1 terabyte). Table 13 shows a conclusion about all the algorithms. Based on the experimental results, many of the measured complexities are around the verification bound, the fastest time in which a process can verify the correctness of a solution. Note that an efficient algorithm must at least sometimes check whether a produced result is a correct solution, thus:

Definition 4.1. There is no efficient algorithm that solves a problem with complexity smaller than the complexity necessary to verify a solution of that problem; for 3-RSS instances this is the verification bound just mentioned.
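The exponent estimation of formulas 4.1 to 4.4 can be sketched in a few lines of Python (the function name is illustrative; the counts are assumed to be measured at doubling sizes n, 2n, 4n, ...):

```python
import math

def estimate_exponent(counts):
    """Given instruction counts measured at doubling sizes n, 2n, 4n, ...,
    fit T(n) = c * n^x: each consecutive ratio gives an exponent
    x_i = log2(T_{i+1} / T_i); return the mean of the x_i and the mean
    absolute deviation from that mean (formulas 4.2 to 4.4)."""
    xs = [math.log2(b / a) for a, b in zip(counts, counts[1:])]
    mean = sum(xs) / len(xs)
    dev = sum(abs(mean - x) for x in xs) / len(xs)
    return mean, dev

# a perfectly cubic count sequence gives exponent 3 with zero deviation
mean, dev = estimate_exponent([5 * (2 ** k) ** 3 for k in range(6)])
assert abs(mean - 3) < 1e-9 and dev < 1e-9
```

Applying this to the AVG columns of the tables above reproduces the reported AVG DEV values up to the precision of the elided fitted expressions.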
We can analyze a 3-SAT instance by its number of variables or its number of clauses. Each clause cancels one state combination of 3 variables, and the number of all possible clauses of a 3-SAT instance is cubic in the number of variables, so with a constant density of clauses the expected number of clauses is also cubic. If we apply the same approach to the randomized 3-RSS instances used in this research, the number of all possible conflicts is quadratic in the number of rules, so with a constant density of conflicts the number of conflicts is also quadratic. Consequently, a complexity that is quadratic in the number of rules n is linear in the input size; for example, the worst-case complexity of PSSP-I-11 expressed in the input size is linear. The practical complexity of PSSP-II on most worst cases is even less than the cost of visiting the whole problem data once, or of verifying a solution (on Worst Case 4 it is bigger). This can happen in practice because we did not count the instructions of the verification checks, but it is not a theoretical complexity.

Table 13. Summary of all algorithms
Algorithm   Type     Theoretical (m-RSS)  Theoretical (3-RSS)  Avg practical  Worst practical  Success  Performance
PSP-Alpha   Exact    …                    …                    …              …                100%     slow
PSP-Beta    Exact    …                    …                    …              …                100%     slow
PSSP-I-1    Random   …                    …                    …              …                100%     fast
PSSP-II     Random   …                    …                    …              …                100%     fast
(Table 13)

Let us assume that the practical complexities we estimated were in fact exponential, T(n) = c · αⁿ for some constant α > 1. Then, by 4.2, each measured exponent would be x_i = log2( α^{2·n_i} / α^{n_i} ) = n_i · log2 α, growing with the size n_i instead of staying constant. Thus:
  • 20. pg. 20 ̅) ( ) ) ( ) ) ( ) ) And: √ ( ) Contrary is that if we assume then deviation must be at least greater than ̅) that is a large number and we cannot compare it with experimental results. In the other hand worst case deviation of PSSP-II is almost 2 then if it be an exponential, α must be at most smaller than √ that is too small to being a root for an exponential and if it be thus is a very good exponential complexity with a very small root. 5. What about all results In some practical uses probably we are interest in obtaining all results of problem or driving system to an arbitrary result. Algorithms as far as producing result are a Yes and No test for a NP-Complete problem thus we can fix variables on different states and test is it satisfiable or not? and with a step by step testing we can find all results of Let assume that complexity of doing a test be Assume there exists a result like . The worst case is that for each rule algorithm start from first state but result be in last state thus we must do test for all states of all rules step by step. Thus a higher bound for finding a result is . If we have g deferent results a higher bound for finding whole of them is . That is a polynomial time. 6. What about problems based on profit Consider a problem based on profit like TSP or Knapsack where is size of problem and is number of digits that every weighted property have at most. If we test problem have a result with a profit at least , this problem is NP and thus can map to a RSS instance. Let complexity of solving this problem be . But maximum profit in such a system cannot overstep of summation of all weighted objects that is . But based on well-known fact about Boolean algebra number of digits of is . Thus if we do a binary test we can find the result in complexity at most ) . That is a polynomial time. 7. Conclusion This paper contains a new idea in processing hard problems: Packed Computation. 
This scope is still virgin territory, and many new ideas can be created in it. The importance of packed computation is that it can cope with NP-Completes. The researcher believes that methods must be found for solving all problems very fast. However, the worst-case complexity of these algorithms remains an open problem.

8. Acknowledgements

I thank Dr. Mohammad Reza Sarshar and Dr. Shahin Seyed Mesgary of the University of Karaj, Sina Mayahi, Professor Albert R. Meyer of MIT, and all others who helped me conduct this research.

9. References and More Study

1. Stephen A. Cook, The Complexity of Theorem-Proving Procedures, University of Toronto, 1971.
2. Richard M. Karp, Reducibility among Combinatorial Problems, University of California at Berkeley, 1972.
3. Michael Mitzenmacher and Eli Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Cambridge University Press.
4. Eric Lehman, F Thomson Leighton and Albert R Meyer, Mathematics for Computer Science.
5. Jeff Erickson, 21 NP-Hard Problems, 2009.
6. Sheldon M. Ross, Introduction to Probability Models, Sixth Edition, Academic Press.
7. Gerhard J. Woeginger, Exact algorithms for NP-hard problems: A survey, Department of Mathematics, University of Twente, P.O. Box 217.
8. Wai-Ki Ching, Michael K. Ng and Eric S. Fung, Higher-order multivariate Markov chains and their applications, Linear Algebra and its Applications, Elsevier.
9. Fabrizio Grandoni, Exact Algorithms for Hard Graph Problems (Algoritmi Esatti per Problemi Difficili su Grafi), Università degli Studi di Roma "Tor Vergata".
10. David Eppstein, Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction.
11. Anatoly Panyukov, Polynomial Solvability of NP-Complete Problems (note that the proof is not acceptable).