# Parallel Graph Coloring Algorithms and an Implementation of Jones-Plassmann

## Graph Coloring Algorithms

### Previous Work

Any planar graph can be colored with at most 4 colors (the Four Color Theorem). Non-planar graphs may need more, and finding the minimum number of colors for a general graph is known to be NP-hard.

Sequential heuristics such as Largest-Degree-First and Smallest-Degree-Last exist, and they can be readily parallelized.
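As a sketch of such a sequential heuristic (assuming a plain adjacency-list representation; this is not code from the referenced paper), greedy coloring in Largest-Degree-First order looks like this:

```java
import java.util.*;

// Sequential greedy coloring in Largest-Degree-First order (a sketch;
// the adjacency-map graph representation is my own assumption).
public class GreedyLDF {
    public static Map<Integer, Integer> color(Map<Integer, List<Integer>> adj) {
        // Visit vertices in order of decreasing degree.
        List<Integer> order = new ArrayList<>(adj.keySet());
        order.sort((a, b) -> adj.get(b).size() - adj.get(a).size());

        Map<Integer, Integer> color = new HashMap<>();
        for (int v : order) {
            // Collect colors already used by neighbors.
            Set<Integer> used = new HashSet<>();
            for (int u : adj.get(v)) {
                Integer c = color.get(u);
                if (c != null) used.add(c);
            }
            // Pick the lowest available color.
            int c = 0;
            while (used.contains(c)) c++;
            color.put(v, c);
        }
        return color;
    }

    public static void main(String[] args) {
        // Triangle 0-1-2 plus a pendant vertex 3 attached to 2.
        Map<Integer, List<Integer>> adj = Map.of(
            0, List.of(1, 2), 1, List.of(0, 2),
            2, List.of(0, 1, 3), 3, List.of(2));
        Map<Integer, Integer> c = color(adj);
        System.out.println("colors used: " + new HashSet<>(c.values()).size());
        // → colors used: 3
    }
}
```

The triangle forces three colors no matter what; the degree ordering simply colors the hardest (highest-degree) vertices while the most colors are still free.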

### Parallel Algorithms

Parallel coloring is based on the simple observation that any independent set of vertices can be colored in parallel. An independent set is a set of vertices in which no two vertices are neighbors.
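A small sketch of that observation (the adjacency-map shape is my assumption): because no two vertices of an independent set share an edge, each one can pick its lowest available color concurrently without ever racing against a neighbor.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Vertices in an independent set share no edges, so all of them can
// choose their lowest available color in the same parallel step.
public class IndependentSetColoring {
    public static boolean isIndependent(Set<Integer> s, Map<Integer, List<Integer>> adj) {
        for (int v : s)
            for (int u : adj.get(v))
                if (s.contains(u)) return false;
        return true;
    }

    // Colors every vertex of the independent set s in parallel.
    public static void colorSet(Set<Integer> s, Map<Integer, List<Integer>> adj,
                                ConcurrentHashMap<Integer, Integer> color) {
        s.parallelStream().forEach(v -> {
            Set<Integer> used = new HashSet<>();
            for (int u : adj.get(v)) {      // u is never in s, so no write race
                Integer c = color.get(u);
                if (c != null) used.add(c);
            }
            int c = 0;
            while (used.contains(c)) c++;   // lowest available color
            color.put(v, c);
        });
    }

    public static void main(String[] args) {
        // Path 0-1-2: {0, 2} is independent, {0, 1} is not.
        Map<Integer, List<Integer>> adj =
            Map.of(0, List.of(1), 1, List.of(0, 2), 2, List.of(1));
        System.out.println(isIndependent(Set.of(0, 2), adj)); // → true
        System.out.println(isIndependent(Set.of(0, 1), adj)); // → false
    }
}
```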

Finding good independent sets is closely related to the graph partitioning problem…

### Jones-Plassmann

Each vertex is first assigned a random number. In every round, each uncolored vertex whose random number is larger than those of all its uncolored neighbors colors itself with the lowest available colour: the smallest color that has not already been assigned to a neighboring vertex. These local maxima form an independent set, so they can all be colored in parallel.
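A minimal Jones-Plassmann sketch follows. It is a sequential simulation of the parallel rounds, not my TinkerPop implementation, and the adjacency-map representation is my own assumption:

```java
import java.util.*;

// Jones-Plassmann (sequential simulation of the parallel rounds).
public class JonesPlassmann {
    public static Map<Integer, Integer> color(Map<Integer, List<Integer>> adj, long seed) {
        Random rnd = new Random(seed);
        Map<Integer, Double> weight = new HashMap<>();
        for (int v : adj.keySet()) weight.put(v, rnd.nextDouble());

        Map<Integer, Integer> color = new HashMap<>();
        while (color.size() < adj.size()) {
            // Uncolored vertices whose weight beats every uncolored
            // neighbor form an independent set: one round's candidates.
            List<Integer> round = new ArrayList<>();
            for (int v : adj.keySet()) {
                if (color.containsKey(v)) continue;
                boolean localMax = true;
                for (int u : adj.get(v))
                    if (!color.containsKey(u) && weight.get(u) > weight.get(v))
                        localMax = false;
                if (localMax) round.add(v);
            }
            // Each winner takes the lowest color unused by its neighbors.
            for (int v : round) {
                Set<Integer> used = new HashSet<>();
                for (int u : adj.get(v))
                    if (color.containsKey(u)) used.add(color.get(u));
                int c = 0;
                while (used.contains(c)) c++;
                color.put(v, c);
            }
        }
        return color;
    }

    public static void main(String[] args) {
        // 5-cycle: chromatic number 3, max degree 2.
        Map<Integer, List<Integer>> cycle = Map.of(
            0, List.of(4, 1), 1, List.of(0, 2), 2, List.of(1, 3),
            3, List.of(2, 4), 4, List.of(3, 0));
        Map<Integer, Integer> c = color(cycle, 1);
        for (int v : cycle.keySet())
            for (int u : cycle.get(v))
                if (c.get(v).equals(c.get(u))) throw new AssertionError("improper");
        System.out.println("colors used: " + new HashSet<>(c.values()).size());
    }
}
```

Progress is guaranteed: in every round the uncolored vertex with the globally largest weight is always a local maximum, so at least one vertex is colored per round.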

### The Largest-Degree-First

It is very similar to Jones-Plassmann; the only difference is that vertex priorities are not chosen at random initially.

First, compute the degree of every vertex. Random numbers are only used to resolve conflicts between neighboring vertices having the same degree.

Coloring then proceeds in order of decreasing degree.

The aim is to use fewer colours than Jones-Plassmann.

> "A vertex with i colored neighbors will require at most color i + 1. The Largest-Degree-First algorithm aims to keep the maximum value of i as small as possible throughout the computation, so that there is a better chance of using only a small number of colors."
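Concretely, only the priority rule changes relative to Jones-Plassmann: degree decides first, and the random number is used only to break ties between neighbors of equal degree. A sketch of that rule (class and method names are my own, assuming the same adjacency-map shape as above):

```java
import java.util.*;

// Largest-Degree-First priority: degree first, random tie-break second.
public class LdfPriority {
    public static boolean hasPriority(int v, int u,
                                      Map<Integer, List<Integer>> adj,
                                      Map<Integer, Double> tieBreak) {
        int dv = adj.get(v).size(), du = adj.get(u).size();
        if (dv != du) return dv > du;              // higher degree colors first
        return tieBreak.get(v) > tieBreak.get(u);  // random tie-break on equal degree
    }

    public static void main(String[] args) {
        // Star: center 0 has degree 3, the leaves have degree 1.
        Map<Integer, List<Integer>> adj = Map.of(
            0, List.of(1, 2, 3), 1, List.of(0), 2, List.of(0), 3, List.of(0));
        Map<Integer, Double> tie = Map.of(0, 0.1, 1, 0.9, 2, 0.5, 3, 0.2);
        System.out.println(hasPriority(0, 1, adj, tie)); // → true (degree wins)
        System.out.println(hasPriority(1, 2, adj, tie)); // → true (tie-break)
    }
}
```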

## Jones-Plassmann Implementation

I implemented Jones-Plassmann graph coloring in Java with TinkerPop 3. Please find the Jones-Plassmann code in my GitHub.

So, essentially, Gibbs sampling is transformed into a graph coloring problem: how to separate a graph into groups such that no two neighbors end up in the same group.

After I finish an evaluation, I will update this post 🙂

These are my learning notes on A Comparison of Parallel Graph Coloring Algorithms (Reference).

Update on Nov 20, 2019

It seems I never finished the evaluation…

Keep calm and update blog.

## 2 thoughts on “Parallel Graph Coloring Algorithms and an Implementation of Jones-Plassmann”

1. Good post. It is also worth mentioning the history of Jones-Plassmann, which is essentially a simple modification of Luby’s parallel MIS algorithm. Jones-Plassmann actually presented their algorithm in asynchronous terms, arguing that creating a new random number at each iteration would be a bottleneck. We mostly know now that it isn’t much of a deal so we can just use the standard Luby formulation.


1. Hi Aydin,

Thanks for the comment! I like your point about the asynchronous view, wow!

The amazing part for me is the parallelization, and I am always curious about approximate processing of time-series data. But I still need some time to catch up on the maths lol. I hope we can share more perspectives in the near future!

Thanks,
Irene
