Abstract—We describe a paradigm for designing parallel algorithms via approximation, and illustrate it on the b-EDGE COVER problem. A b-EDGE COVER of minimum weight in a graph is a subset C of its edges such that at least a specified number b(v) of edges in C are incident on each vertex v, and the sum of the edge weights in C is minimum. The GREEDY algorithm and a variant, the LSE algorithm, provide 3/2-approximation guarantees in the worst case for this problem, but these algorithms have limited parallelism. Hence we design two new 2-approximation algorithms with greater concurrency. The MCE algorithm reduces the computation of a b-EDGE COVER to that of finding a b′-MATCHING, by exploiting the relationship between these subgraphs in an approximation context. The S-LSE algorithm is derived from the LSE algorithm by using static edge weights rather than dynamically computed effective edge weights. This relaxation gives S-LSE a worse approximation guarantee but makes it more amenable to parallelization. We prove that both the MCE and S-LSE algorithms compute the same b-EDGE COVER, with weight at most twice that of a minimum-weight b-EDGE COVER. In practice, the 2-approximation and 3/2-approximation algorithms compute edge covers of weight within 10% of the optimal. We implement three of the approximation algorithms, MCE, LSE, and S-LSE, on shared-memory multi-core machines, including an Intel Xeon and an IBM Power8 machine with 8 TB of memory. The MCE algorithm is the fastest of these by an order of magnitude or more. It computes an edge cover in a graph with billions of edges in 20 seconds using two hundred threads on the IBM Power8. We also show that the parallel depth and work can be bounded for the SUITOR and b-SUITOR algorithms when edge weights are random.
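To make the static-weight idea concrete, the following is a minimal illustrative sketch (not the paper's exact S-LSE algorithm, and without its approximation analysis): edges are scanned in nondecreasing weight order, and an edge is kept while either endpoint is still deficient, i.e., covered by fewer than b(v) chosen edges. The function name and data layout are assumptions for illustration.

```python
def greedy_b_edge_cover(edges, b):
    """Static-weight greedy sketch for b-EDGE COVER.

    edges: list of (weight, u, v) tuples.
    b:     dict mapping each vertex to its required coverage b(v).
    Returns the list of chosen (weight, u, v) edges.
    """
    need = dict(b)   # remaining deficiency per vertex
    cover = []
    # Scan edges by static (fixed) weight, cheapest first.
    for w, u, v in sorted(edges):
        if need.get(u, 0) > 0 or need.get(v, 0) > 0:
            cover.append((w, u, v))
            if need.get(u, 0) > 0:
                need[u] -= 1
            if need.get(v, 0) > 0:
                need[v] -= 1
    return cover

# Tiny example: a path a-b-c with b(v) = 1 for every vertex.
edges = [(1.0, "a", "b"), (2.0, "b", "c")]
print(greedy_b_edge_cover(edges, {"a": 1, "b": 1, "c": 1}))
```

Because the weights here are static, each edge's priority never changes as the cover grows; the dynamic "effective weight" used by GREEDY and LSE would instead be recomputed as endpoints become covered, which is what limits their parallelism.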