
DELHISTR - Editorial

Angry Battu

Problem:
practice

Tags: Divide and conquer.

Author: Shami.

The problem is just a paraphrase of counting inversions in an array. An inversion is a pair of indices (i, j) with i < j such that A[i] > A[j].

The brute force way is to check all pairs. Complexity: O(N^2). This is not good enough for Subtask 2. Since the algorithm is based on comparing elements, the lower bound is the same as the lower bound for comparison sorting, i.e. O(N log N).

Let's break the array into two parts - left and right - and count the number of inversions in both. This still does not tell us the number of inversions between the left and right parts. However, if we sort both parts and then merge them, we can determine the relative ordering between left and right. This algorithm works just like merge sort, with an extra counter for inversions.
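
For illustration, here is a minimal C++ sketch of this merge-sort-based inversion count (my own code, not the setter's; function and variable names are illustrative):

```
#include <algorithm>
#include <vector>
using namespace std;

// Counts inversions in a[lo..hi) while sorting that range, merge-sort style.
long long countInversions(vector<int>& a, int lo, int hi) {
    if (hi - lo <= 1) return 0;
    int mid = (lo + hi) / 2;
    long long inv = countInversions(a, lo, mid) + countInversions(a, mid, hi);
    vector<int> merged;
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi) {
        if (a[i] <= a[j]) merged.push_back(a[i++]);
        else {
            inv += mid - i;            // a[j] is smaller than every remaining left-half element
            merged.push_back(a[j++]);
        }
    }
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi) merged.push_back(a[j++]);
    copy(merged.begin(), merged.end(), a.begin() + lo);
    return inv;
}
// usage: long long ans = countInversions(a, 0, (int)a.size());
```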


Time Complexity of std::lower_bound() vs container::lower_bound() is NOT ALWAYS SAME!

I was solving this November Lunchtime 2017 problem: https://www.codechef.com/problems/LRQUER and after getting a lot of TLEs, I learned the following:

std::lower_bound(<container>.begin(), <container>.end(), query) and <container>.lower_bound(query) do not always have the same time complexity.

To be specific, in containers that support random access iterators (std::vector, std::array, etc.) they have the same time complexity, but not in containers that don't support them (std::set, etc.). I was using std::multiset in my solution.
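
A minimal illustration of the difference (my own snippet, values are arbitrary):

```
#include <algorithm>
#include <set>
#include <vector>

int main() {
    std::multiset<int> ms = {1, 3, 5, 7};
    // O(log n): uses the internal tree structure of the container
    auto it1 = ms.lower_bound(5);
    // O(n): std::lower_bound does O(log n) comparisons, but advancing the
    // multiset's bidirectional iterators is linear overall
    auto it2 = std::lower_bound(ms.begin(), ms.end(), 5);

    std::vector<int> v = {1, 3, 5, 7};
    // O(log n): vector iterators are random access
    auto it3 = std::lower_bound(v.begin(), v.end(), 5);
    return 0;
}
```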

Please read this comment in Codeforces to learn more: http://codeforces.com/blog/entry/20553?#comment-252670

This is also mentioned in Complexity section of std::lower_bound() in cppreference website: http://en.cppreference.com/w/cpp/algorithm/lower_bound

Help in FRUITS (beginner section)

Please help me in GOODBAD

This program gives the correct output in Code::Blocks but shows Wrong Answer on CodeChef.

Problem statement: https://www.codechef.com/problems/GOODBAD

My program:

#include <iostream>
#include <string>
#include <cctype>
using namespace std;

int main()
{
    int T, N, K, countC, countS;
    string s;
    cin >> T;
    while (T > 0) {
        cin >> N >> K;
        cin >> s;
        countC = 0;   // number of uppercase letters
        countS = 0;   // number of lowercase letters
        for (int i = 0; i < (int)s.length(); i++) {
            if (isupper((unsigned char)s[i]))   // isupper() returns any non-zero value, so don't compare it to 1
                countC++;
            else
                countS++;
        }
        if (countC == K)
            cout << "chef" << endl;
        else if (countS == K)
            cout << "brother" << endl;
        else if (K == 0) {
            if (countC == N)
                cout << "brother" << endl;
            else
                cout << "chef" << endl;
        }
        else if (countS > K && countC < K)
            cout << "chef" << endl;
        else if (countC > K && countS < K)
            cout << "brother" << endl;
        else if (countS < K && countC < K)
            cout << "both" << endl;
        else
            cout << "none" << endl;
        T--;
    }
    return 0;
}

BLREDSET - Editorial

PROBLEM LINK:

Practice
Contest

Author: Praveen Dhinwa
Tester: Misha Chorniy
Editorialist: Animesh Fatehpuria

PROBLEM

You are given a tree $T$ with $N$ vertices. A vertex can be either black, red, or uncolored. There is at least one black and at least one red vertex. Compute the number of subsets of vertices $W$ such that:

  • Each vertex in $W$ is uncolored.
  • $W$ is a connected subgraph of $T$.
  • If you remove all the vertices in $W$, there will be at least one pair of vertices $(u, v)$ such that $u$ is colored black and $v$ red and there is no path from node $u$ to node $v$.

Output your answer modulo $10^9 + 7$. Constraint: $N \le 10^5$.

PREREQUISITES

Basic Familiarity with Tree DP.

EXPLANATION

There are many possible solutions to this problem, all of which use some sort of dynamic programming on trees. Thus, it is required to have some level of familiarity with that topic. If you are not familiar with DP on Trees, check out this blog.

We want to compute the number of connected subgraphs of uncolored vertices that disconnect at least one pair of different-colored nodes. We call a vertex $v$ good iff $v$ is uncolored and the removal of $v$ alone disconnects at least one pair of different-colored nodes. Since $T$ is a tree, a connected subgraph of uncolored vertices disconnects such a pair exactly when it contains at least one good node, so we want to compute the number of connected subgraphs of uncolored vertices in $T$ that contain at least one good node. This is equivalent to computing all possible connected subsets (of uncolored nodes), and then subtracting those that have no good nodes. It turns out that this can be done with a straightforward dynamic program.

Precomputing all Good Nodes

Root the tree arbitrarily. Compute for each node $v$ two quantities, $red[v]$ and $black[v]$, denoting the number of red and black vertices in the subtree rooted at $v$ respectively. This can be done with one DFS. To check if a node $u$ is good, consider $T - \{u\}$. A node $u$ is good iff:

  • $u$ is uncolored.
  • There are at least two components with colored nodes in $T - \{u\}$.

Thus, we can check all neighbours of $u$ and see whether at least two of the neighbours' components contain colored vertices, using our precomputed arrays $red[v]$ and $black[v]$ (for a child the counts come directly from its subtree, and for the parent side they are the totals minus the counts in $u$'s subtree). Using this approach, we can precompute all good nodes in $O(n)$ total time.
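
For concreteness, here is a sketch of this precomputation (my own code, not the author's solution; it assumes $1$-indexed vertices and a $color$ array with $0$ = uncolored, $1$ = black, $2$ = red):

```
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 100005;
vector<int> adj[MAXN];
int color[MAXN];                 // 0 = uncolored, 1 = black, 2 = red
int red_[MAXN], black_[MAXN];    // counts inside the subtree rooted at v
int par[MAXN];
bool good[MAXN];
int totalRed, totalBlack;

void dfs(int u, int p) {
    par[u] = p;
    red_[u] = (color[u] == 2);
    black_[u] = (color[u] == 1);
    for (int v : adj[u]) if (v != p) {
        dfs(v, u);
        red_[u] += red_[v];
        black_[u] += black_[v];
    }
}

void markGoodNodes(int n, int root) {
    dfs(root, 0);
    totalRed = red_[root];
    totalBlack = black_[root];
    for (int u = 1; u <= n; u++) {
        if (color[u] != 0) continue;          // a good node must be uncolored
        int sidesWithColor = 0;
        for (int v : adj[u]) {
            int cnt;
            if (v == par[u])                  // component containing the parent side
                cnt = (totalRed - red_[u]) + (totalBlack - black_[u]);
            else                              // component hanging below child v
                cnt = red_[v] + black_[v];
            if (cnt > 0) sidesWithColor++;
        }
        good[u] = (sidesWithColor >= 2);
    }
}
```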

Computing Number of Connected Subgraphs with no Good Nodes

For this step, we require to do a tree DP. Again, we root the tree arbitrarily, say at node $r$. Let $dp[v][0]$ denote the number of valid subgraphs in the subtree rooted at $v$ that don't include $v$, and let $dp[v][1]$ denote the number of valid subgraphs in the subtree rooted at $v$ that include $v$. The answer is $dp[r][0] + dp[r][1]$.

The transitions are fairly straightforward and I'll leave it as an exercise. As a hint, note that the transition depends on the color of the node. If you are struggling to find these transitions, please go through the blog mentioned above. It explains the basic ideas of Tree DP with some very good examples. If you're still stuck, please read some available solution.
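
One possible set of transitions (a sketch of my own, not necessarily the setter's exact formulation): if $v$ is colored or good, then $dp[v][1] = 0$; otherwise $dp[v][1] = \prod_{c} (1 + dp[c][1])$ over the children $c$ of $v$, since from each child's subtree we either take nothing or attach a valid subgraph containing $c$. In both cases $dp[v][0] = \sum_{c} (dp[c][0] + dp[c][1])$, because a subgraph not containing $v$ lies entirely inside one child's subtree. Running this DP once to count all connected subsets of uncolored vertices, and once to count only those without good nodes, and subtracting the two results, gives the final answer.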

Since there are $O(n)$ states (and the transition is $O(1)$), the total time complexity is $O(n)$.

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.
Tester's solution can be found here.

Help in optimizing solution of a SPOJ problem.

Runtime error NZEC

For the code in https://www.codechef.com/viewsolution/16977724 I am getting the error mentioned in the title.

The code runs fine in eclipse on my laptop.

Also, I am reading the input in the same way as done in the above program and getting the output in 0.4 to 0.6 seconds.

Please advise where I am going wrong and how I can avoid this error in the future.

BLOCKING - Editorial

PROBLEM LINK:

Practice
Contest

DIFFICULTY:

Easy-Medium

PREREQUISITES:

Graph theory, Graph matchings, Stable Marriage Problem

PROBLEM:

There are N people and N houses. Each person will visit each house at some time according to a predefined schedule. We need to find a distinct house for each person so that when that person visits that house, he is locked in there. Further, this allotment should be done in such a way that once person A is locked in house H at time T, no other person visits house H after time T.

Your task is to find such an allotment of houses.

QUICK EXPLANATION:

Model this problem as a stable marriage problem. Use the Gale-Shapley algorithm to solve it.

DETAILED EXPLANATION:

This is one of my personal favorite problems from this contest. At its heart it is a semi-standard problem which is taught during several algorithm courses but is concealed nicely.

The best way I can think of to explain the solution of this problem is to directly introduce the Stable Marriage Problem and then model our problem in terms of SMP.

Let's say there are N men and N women. Each man has rated each woman and each woman has rated each man. Now we want to make N couples out of these N men and N women, matching each man to a woman.

We also want our matchmaking to be stable. What does that mean? Let's say M1 is married to W1 and M2 is married to W2. If M1 prefers W2 over W1 and W2 prefers M1 over M2, then M1 and W2 could as well break their current marriages and marry each other. So we want our matchmaking to be stable, such that no man and no woman have an incentive to break their marriages and marry each other. More formally, for all pairs (M1, W1) and (M2, W2), either M1 prefers W1 over W2 or W2 prefers M2 over M1. It can be shown that a stable marriage system always exists.

Now that we know what SMP is, let's model this problem. All friends become men and all houses become women. This part is simple. What we want here is also a matching. But we need to create preferences for both the people and the houses, and in such a way that the constraints of this problem turn into the constraints of the stable marriage problem.

Let's say a friend F prefers house H1 over house H2 if he visits H1 first in his own schedule. Also, house H prefers friend F1 over friend F2 if F1 visits H after F2 does.

We'll now prove that the constraints of the problem are identical to the constraints of SMP and that the solution set of SMP is the same as the solution set of this problem. For the proofs below, let T1 be the time at which F1 visits house H1, T2 the time at which F2 visits house H2, T3 the time at which F1 visits house H2, and T4 the time at which F2 visits house H1.


Claim 1 : If an allocation exists that matches the constraints of the problem, that allocation is also a solution to SMP, assuming the preferences defined above.

Proof : Assume, if possible, that this allocation is not a solution to SMP. This implies there exist matched pairs which can be broken so that the partners marry each other. Let's say F1 and H2 want to marry after breaking their current marriages (F1, H1) and (F2, H2). This means that F1 prefers H2 over H1 and also H2 prefers F1 over F2, i.e. T3 < T1 and T3 > T2.

These two together mean that T2 < T3 < T1. But this contradicts the fact that the given allocation satisfies the constraints of the problem. Why? Because F2 has settled in house H2 at time T2, F1 settles at time T1, and as T3 < T1, F1 visits house H2 at time T3; but at that time F2 is already locked in house H2.

Hence our assumption was wrong => Every solution to given problem is a solution to SMP.

Hence proved.


Claim 2 : Every solution to SMP under the preference set as described above is a solution to our problem as well.

Proof : Assume this is not the case, i.e. there exists some SMP solution which is not a solution to our problem. Then in this solution some friend visits a house after another friend has already settled there. Without loss of generality, assume that F2 is that friend and he goes to house H1 where F1 is already settled. As F2 is still moving at that time, he has not settled yet, so T4 < T2. Also, as F1 has already settled, T1 < T4. These two together imply that T1 < T4 < T2. But this violates the fact that the given allocation is a solution to SMP, because now F2 and H1 can break their current marriages and marry each other.

So our assumption was wrong => every solution to SMP is indeed a solution to our problem.

Hence proved.


From claim 1 and claim 2, solution set of this problem is same as solution set of stable marriage problem. You can read about SMP in detail at Wikipedia or your favorite algorithm text book.

Of course, one didn't need to know SMP explicitly to solve this. Our tester wasn't aware of SMP and he still came up with a similar greedy solution on his own :)
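
For reference, here is a generic sketch of the Gale-Shapley algorithm (my own code, not the setter's or tester's solution). In our modelling, the friends play the role of the men, proposing to houses in the order in which they visit them, and each house ranks friends so that a later visitor is preferred; all names below are illustrative:

```
#include <bits/stdc++.h>
using namespace std;

// pref[m][k]  : k-th most preferred woman of man m (0-indexed ids)
// rank_[w][m] : position of man m in woman w's preference list (smaller = more preferred)
// Returns wife[m] for every man m.
vector<int> galeShapley(const vector<vector<int>>& pref, const vector<vector<int>>& rank_) {
    int n = pref.size();
    vector<int> nextProposal(n, 0);   // next index in pref[m] that man m will propose to
    vector<int> husband(n, -1);       // current partner of each woman, -1 if free
    vector<int> wife(n, -1);
    queue<int> freeMen;
    for (int m = 0; m < n; m++) freeMen.push(m);

    while (!freeMen.empty()) {
        int m = freeMen.front(); freeMen.pop();
        int w = pref[m][nextProposal[m]++];   // best woman m has not proposed to yet
        if (husband[w] == -1) {               // w is free: engage
            husband[w] = m; wife[m] = w;
        } else if (rank_[w][m] < rank_[w][husband[w]]) {
            // w prefers m over her current partner: the old partner becomes free
            wife[husband[w]] = -1;
            freeMen.push(husband[w]);
            husband[w] = m; wife[m] = w;
        } else {
            freeMen.push(m);                  // rejected, m stays free
        }
    }
    return wife;
}
```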

SETTER'S SOLUTION:

Can be found here.

TESTER'S SOLUTION:

Can be found here.

SIMILAR PROBLEMS:

Stable Marriage Problem on Spoj


SNTEMPLE - Editorial

PROBLEM LINK:

Practice
Contest

Author: admin2
Testers: Hasan Jaddouh
Editorialist: Pawel Kacprzak

DIFFICULTY:

Easy

PREREQUISITES:

Binary search, precomputation

PROBLEM:

Let a magic sequence of order $m$, denoted also by $magic(m)$, be a sequence of integers starting from $1$ and increasing by $1$ up to $m$, and then decreasing from $m$ down to $1$. For example, the magic sequence of order $5$ is: $1, 2, 3, 4, 5, 4, 3, 2, 1$. Notice that $magic(m)$ has length $2 \cdot m - 1$.

Given an array $h$ of $n$ positive integers, denoting the heights of consecutive blocks, you want to perform the smallest number of operations, each reducing the height of a selected block by $1$, in such a way that after all operations are performed, there exists exactly one magic sequence in $h$ and all elements of $h$ not included in the magic sequence are $0$.

EXPLANATION:

The problem can be reformulated as follows: find the largest possible integer $m$ such that a magic sequence of order $m$ can be formed in $h$. You might wonder why this problem is equivalent to the one asked in the original statement. Well, let $s$ be the sum of all integers in $h$. In order to form the resulting sequence $magic(m)$ in $h$ and set all elements in $h$ not included in this sequence to $0$, we have to perform $s - (m \cdot (m+1) - m) = s - m^2$ operations, because the sum of heights in $magic(m)$ is $m^2$. Since we want to minimize the number of performed operations, we want to maximize $m$.

The second crucial observation is that if we can form a sequence $magic(m)$ in $h$, then for each $k < m$, we can also form a sequence $magic(k)$ in $h$. This follows from the fact that in order to transform the sequence $magic(m)$ into $magic(m-1)$, the only thing to do is to subtract one from each of its elements and remove the left-most and the right-most elements.

Based on the above two observations, we can use binary search to find the largest $m$, such that sequence $magic(m)$ can be formed in $h$. The lower bound for binary search should be set to $1$, and the upper bound can be set to for example $n$, but one can also compute the more exact upper bound, although it doesn't matter much.

Now, how do we find out, for a fixed $m$, whether a sequence $magic(m)$ can be formed in $h$, and do it in linear time?

One possible solution is to compute two boolean arrays of size $n$:

  • $L[i] := true$ if and only if the sequence $1, 2, \ldots, m$ can be formed in $h[i-m+1], h[i-m+2], \ldots, h[i]$.
  • $R[i] := true$ if and only if the sequence $m, m-1, \ldots, 1$ can be formed in $h[i], h[i+1], \ldots, h[i+m-1]$.

Then, sequence $magic(m)$ can be formed in $h$ if and only if there exists $i$, $1 \leq i \leq n$, such that both $L[i]$ and $R[i]$ are $true$. Thus the problem is reduced to computing arrays $L$ and $R$.

Let's take a closer look on how to compute array $L$. Computing array $R$ is based on the same idea.

First of all, let's set all entries in $L$ to $false$. The idea is to iterate over $h$ from left to right (right to left if you want to compute $R$) while updating variable $k$ denoting the length of the longest sequence $1, 2, 3, \ldots, k$ ending in the current element, where $k$ is at most $m$. The below pseudocode illustrates the approach:

k = 0
for i = 1 to n:
    if h[i] >= k+1:
        k += 1
        if k == m:
            L[i] = true
            k -= 1
    else:
        k = h[i]

The above idea is based on the fact that if we have a sequence $1, 2, \ldots, k$ ending at index $i-1$, then if $h[i]$ is at least $k+1$ we can extend the sequence by one; and if $h[i]$ is at most $k$, then the best we can do is to set $k$ to $h[i]$, because a sequence $1, 2, \ldots, k$ ending at index $i-1$ can be transformed into a sequence $1, 2, \ldots, h[i]$ (no longer than before) ending at index $i$ if $h[i] \leq k$.

Since the computing of arrays $L$ and $R$ takes $O(n)$ time, and the range over we perform binary search has length $O(n)$, the total time complexity is $O(n \cdot \log(n))$.

As was pointed out in the comments, the above method can be improved to an $O(n)$ solution by avoiding binary search. The idea is to compute arrays $L$ and $R$ at the beginning, without fixing $m$. We define $L[i]$ (and similarly $R[i]$) as the maximum length of a sequence $1, 2, \ldots$ ending at $i$, and compute it as follows:

k = 0
for i = 1 to n:
    if h[i] >= k+1:
        k += 1
    else:
        k = h[i]
    L[i] = k

Then, after computing arrays $L$ and $R$, we know that the maximum $m$ for which magic sequence of order $m$ exists is $\max\limits_{1 \leq i \leq n} \min(L[i], R[i])$
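
For illustration, here is a minimal C++ sketch of this $O(n)$ approach (my own code, not the setter's; it assumes the input is simply $n$ followed by the $n$ heights, while the actual problem may use a different input format, e.g. multiple test cases):

```
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;                                 // assumption: single test case, n then heights
    vector<long long> h(n + 1), L(n + 2, 0), R(n + 2, 0);
    for (int i = 1; i <= n; i++) cin >> h[i];

    long long k = 0;
    for (int i = 1; i <= n; i++) {            // longest 1,2,...,k ending at i
        if (h[i] >= k + 1) k++;
        else k = h[i];
        L[i] = k;
    }
    k = 0;
    for (int i = n; i >= 1; i--) {            // longest k,...,2,1 starting at i
        if (h[i] >= k + 1) k++;
        else k = h[i];
        R[i] = k;
    }

    long long m = 0, s = 0;
    for (int i = 1; i <= n; i++) {
        m = max(m, min(L[i], R[i]));
        s += h[i];
    }
    cout << s - m * m << endl;                // minimum number of operations
    return 0;
}
```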

AUTHOR'S AND TESTER'S SOLUTIONS:


Setter's solution can be found here.
Tester's solution can be found here.
Editorialist’s solution can be found here.

help in optimizing solution of a spoj problem.

DS = QUAD TREE Unofficial editorials December Long Challenge Part 3

Hello Guys,

This is the part 3 of three posts of Unofficial editorials for December Long Challenge.

For editorials of problems GIT01, CPLAY, and VK18, click here.

For editorials of problems CHEFHAM and CHEFEXQ, click here.

This post has the editorial for problem REDBLUE.

I have decided to dedicate this post to a single problem, because I am going to write a tutorial on the Quad Tree data structure first, followed by the editorial of the REDBLUE problem.

Quad Tree

Idea:

Quad Tree is basically an extension of the segment tree. While a segment tree deals with an array (or a number line), a Quad Tree deals with a grid (the X-Y plane). The core idea of the Quad Tree and the segment tree is the same: store the result for a node (called a region in a quad tree), combine the results of the children nodes to get the result of the current node, and so on.

In a Quad Tree, each region is a rectangle, which can be represented in the following two ways.

  1. Four integers x1, y1, x2 and y2, where (x1,y1) is the top-left and (x2,y2) the bottom-right corner of the given region.
  2. One point (x,y) and two integers H and W, the point being the center of the rectangle, i.e. ((x1+x2)/2, (y1+y2)/2), with H denoting the height and W the width of the rectangle (or a single radius R, if the region is circle-shaped in our scenario).

Both of the above can be used in different scenarios; each has its own advantages and disadvantages.

The first representation is easier to use, but won't work well if we need a quad tree for object-collision detection in a game simulation where our objects are circle-shaped.

I have used the first representation for solving REDBLUE, and therefore will talk about the first representation only. The second representation can be used in a similar way.

Suppose we have a region ((x1,y1) (x2,y2)) denoting top-left and bottom-right corners of region. Now, we are going to find its children regions (or sub-regions).

Let midX = (x1+x2)/2 and midY = (y1+y2)/2.

Four sub-regions are

  1. ((x1,y1) (midX,midY))
  2. ((midX+1,y1) (x2,midY))
  3. ((x1,midY+1) (midX,y2))
  4. ((midX+1,midY+1) (x2,y2))

Now, we can store the answer for each region and solve grid queries quickly.

Talking about time complexity, it depends upon the type of query we need to answer. For example, suppose we need to count the number of points which lie below a line. A line may pass through two or three of the four sub-regions of a region. So, for the remaining sub-region(s) we can compute the answer in O(1), because all points in such a sub-region lie on only one side of the line.

In theory, the following Recurrence will define our time complexity.

T(N) = 3T(N/4) + O(1).

In practice, a Quad Tree gives better run-time if the points are sparse. I don't know the exact upper bound on the query time complexity of a Quad Tree, but in no case will it be worse than O(N). Read this thread about the time complexity of Quad Trees.

Note: I have tried to explain the Quad Tree in a simple way, and in that process I haven't discussed many variations for which a Quad Tree may be used. I wanted to share the core idea of the Quad Tree, after which the other variants found on the internet should be easy to understand.

The version of the Quad Tree discussed above is static, i.e. we don't allow insertion of new points; the details of the dynamic version can easily be found on the internet.

Implementation:

We are basically going to use the same explicit-children representation as for a segment tree (where each node stores its left and right child), with the difference that each node will have 4 children instead of the usual two.

Also, we need to decide which child holds information about which sub-region. For example, in my solution's quad tree, if the origin (x,y) is taken as the center of a region, the child[0] region has both coordinates negative (relative to the center), child[1] has x positive but y negative, child[2] has y positive but x negative, and child[3] has both x and y positive. It is a matter of choice.

Suppose we are given a set of points. For every query (number of queries <= 1e5), we are given the equation of a line in slope-intercept form, and we need to report the number of points lying above the given line.

We will use the following classes (or structs). We will build the Quad Tree beforehand; int[][] points denotes the given points.

class Point {
    int x, y;
}

class Node {
    Point topleft, bottomright;          // corners of the region covered by this node
    Node[] child = new Node[4];          // the four sub-regions
    // Additional data for the region, depending upon the queries asked.
    ArrayList<Integer> list;             // indices of the given points lying in this region (indices into the points array)

    void build() {
        int midx = (topleft.x + bottomright.x) / 2, midy = (topleft.y + bottomright.y) / 2;
        for (int i : list) {
            if (points[i][0] <= midx && points[i][1] <= midy) child[0].list.add(i);
            // the remaining three children are filled in the same way, as explained above
        }
        for (int i = 0; i < 4; i++) child[i].build();
    }

    int query(/* query variables, e.g. slope and intercept of the line */) {
        int count = 0;
        for (int i = 0; i < 4; i++) {
            if (/* the given line passes through region child[i] */) count += child[i].query(slope, intercept);
            else if (/* region child[i] lies entirely above the line */) count += child[i].list.size();
        }
        return count;
    }
}

That's all about the Quad Tree; now we will move on to problem REDBLUE.

Problem REDBLUE

Problem difficulty: Medium

Prerequisites: Quad Tree

Problem Explanation

Given two sets of points (red and blue), delete the minimum number of points such that we can draw a line, in any direction, which separates the two sets into different regions.

Solution

Naive Solution:

First thing to note: I am assuming that points may lie on the dividing line. (If this assumption is wrong, we can always shift the line very slightly to get a dividing line which doesn't pass through any point and also doesn't affect our answer, because no three points are collinear.)

Set answer to N+M.

So, the simple solution is: for every pair of one red point and one blue point, form the line through them and count the number of red points lying above the line, red points below the line, blue points above the line and blue points below the line.

Now, we see that if this pair is the optimal dividing line, we need to delete minimum of

  1. red points lying above + blue points lying below line.
  2. red points lying below + blue points lying above line.

ans = min(ans, min(red_above + blue_below, red_below + blue_above));

Print the answer. The time complexity of the above solution is O(N^3), giving us 20 points.

See this solution here.

Now, see what we are doing: for every pair, we build a line using the two-point form, and count the number of points lying above and below the line. We also see that the number of points below the line = total points - number of points above the line - 1 (the point lying on the line).

So, for every line, we just need number of red points and blue points above the line.

This is where the quad tree comes into the picture. We will build two quad trees, one for red points and one for blue points. We will just run two nested loops (one over red, one over blue), form a line for every pair of points, use the quad trees to answer the above-the-line queries, and print the answer.

Link to my Code.

Please share your views about my editorial, especially about the Quad Tree part, since this is my first tutorial on a data structure.

Enjoy coding.

CHEFEXQ - Editorial

PROBLEM LINK:

Practice
Contest

Author: Yogendra Bhanu Singh
Tester: Mugurel Ionut Andreica
Editorialist: Kirill Gulin

PROBLEM

You have an array of $N$ elements and 2 types of queries:

  1. Given two numbers $i$ and $x$, the value at index $i$ should be updated to $x$.
  2. Given two numbers $i$ and $k$, your program should output the total number of prefixes of the array with the last index $\leq i$ in which the xor of all elements is equal to $k$.

QUICK EXPLANATION

Replace the array by its prefix-xor array. The queries become: xor a suffix of the array with some value, and find how many indices up to a given one have a value equal to some other given value. To do this, split the array into blocks of size $O(\sqrt N)$ and perform the queries over the blocks.

EXPLANATION

0-indexation is used in the editorial. Symbol $\oplus$ means bitwise xor.

Denote by $p[i]$ the bitwise xor of the first $i+1$ elements of the array, i.e. $p[i] = a_0$ $\oplus$ $a_1$ $\oplus$ $\dots$ $\oplus$ $a_i$. The array $p$ is called the prefix xor array of $a$ and can be calculated in $O(n)$ time using the fact that $p[i] = p[i - 1] \oplus a[i]$ for $i \geq 1$. Replace the array $a$ with its prefix xor array $p$ and perform the queries over the array $p$, not $a$.

Suppose we need to perform a query of the second type, i.e. find the number of prefixes of the array $a$ of length no more than $k$ whose xor equals a given $x$. Clearly, such a query on the array $a$ is the same as the query "how many values on the prefix of length $k$ are equal to $x$" on the array $p$.

If we need to perform a query of the first type, we have to recalculate the array $p$. Suppose we need to change the value at position $i$ in $a$ to $x$. Let's understand how the array $p$ changes. For any $j < i$, $p[j]$ doesn't change, since the $i$-th element doesn't affect the prefixes ending before position $i$. At the same time, for any $j \geq i$, the old value of $a[i]$ should no longer contribute to $p[j]$, so we need to "cancel" its contribution. Since $y \oplus y = 0$ for any $y$, we can do $p[j] = p[j] \oplus a[i]$, thereby removing the old value of $a[i]$ from $p[j]$, and then $p[j] = p[j] \oplus x$, adding $x$ to $p[j]$. Therefore, changing the value at position $i$ in $a$ amounts to doing $p[j] = p[j] \oplus c$ for each $j \geq i$, where $c = x \oplus a[i]$.

So queries became as follows:

  1. Given $i$ and $x$, do $p[j] = p[j]$ $\oplus$ $c$ for each $j \geq i$, where $c$ = $a[i]$ $\oplus$ $x$.
  2. Find the amount of such $i$’s that $i \leq k$ and $p[i] = x$ with given $k$ and $x$.

It can be done using sqrt-decomposition. Split the array $p$ into blocks of length $B$ each. That is, elements of the array with indexes $[0; B-1]$ belong to the block with index $0$, elements with indexes $[B; 2B-1]$ belong to the block with index $1$, and so on; elements with indexes $[\frac{N-1}{B} \cdot B; N-1]$ belong to the last block (which can contain fewer than $B$ elements). Obviously there are $O(\frac{N}{B})$ such blocks, and any index $i$ belongs to the block with index $\frac{i}{B}$. For each block $i$ store an additional value $t[i]$, meaning that each value inside this block is xorred with $t[i]$ ($t[i] = 0$ initially). In other words, for each index $j$ of the array, the actual value of the $j$-th prefix xor is $p[j] \oplus t[\frac{j}{B}]$. Each block also stores an array $freq[i][j]$, counting how many values of $p$ equal to $j$ belong to it. Before performing the queries, calculate the array $freq$ by simply increasing $freq[\frac{i}{B}][p[i]]$ by $1$ for each $i$.

Now suppose a query of the first type comes. Suppose index $i$ belongs to the $k = \frac{i}{B}$-th block. Set $c$ to $a[i] \oplus x$ and $a[i]$ to $x$. Then for any index $j \geq i$ in the $k$-th block, $p[j]$ should be changed to $p[j] \oplus c$. Just iterate over all such $j$'s: decrease $freq[k][p[j]]$ by $1$, increase $freq[k][p[j] \oplus c]$ by $1$ and do $p[j] = p[j] \oplus c$. In this way we update the information in the $k$-th block in $O(B)$ time. For any block with index $b > k$ we can't afford to update each index independently because it's too slow, but we can simply change $t[b]$ to $t[b] \oplus c$, "promising" to xor it with $c$ later, when answering the queries of the second type.

For answering a query, suppose $i$ belongs to the $k=\frac{i}{B}$-th block. Then we need to check each index $j \leq i$ inside this block: just iterate over $j$ from the beginning of the $k$-th block to $i$ and increase the answer if $p[j] \oplus t[\frac{j}{B}]$ is equal to the $x$ from the query. For each block $b$ before $k$, we know the frequency of each value inside it, so we can just add $freq[b][x]$ to the answer; remembering about the pending xors, it turns into $freq[b][x \oplus t[b]]$.

In each query we iterate over $O(\frac{N}{B})$ blocks and over one whole block, taking $O(B)$ time for it. So each query takes $O(B + \frac{N}{B})$ time. Taking $B = \sqrt N$ leads to the optimal $O(\sqrt N)$ time per query, so the array should be split into blocks of size around $\sqrt N$. The total time complexity is $O(Q \sqrt N)$.
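
For concreteness, here is a sketch of the block decomposition described above (my own code, not the author's or tester's solution; I keep $freq$ as a hash map per block so the memory does not depend on the range of the values):

```
#include <bits/stdc++.h>
using namespace std;

int n, B;
vector<int> a, p, t;                      // t[b] = pending xor applied lazily to block b
vector<unordered_map<int, int>> freq;     // freq[b][v] = how many p[j] == v inside block b

void build() {                            // assumes n is set and a holds the initial array
    B = max(1, (int)sqrt((double)n));
    int blocks = (n + B - 1) / B;
    p.assign(n, 0);
    t.assign(blocks, 0);
    freq.assign(blocks, {});
    for (int i = 0; i < n; i++) {
        p[i] = (i ? p[i - 1] : 0) ^ a[i];
        freq[i / B][p[i]]++;
    }
}

// Query type 1: set a[i] = x, i.e. xor p[j] with c = a[i] ^ x for every j >= i.
void update(int i, int x) {
    int c = a[i] ^ x, k = i / B;
    a[i] = x;
    if (c == 0) return;
    for (int j = i; j < min(n, (k + 1) * B); j++) {          // rest of block k: rebuild explicitly
        freq[k][p[j]]--;
        p[j] ^= c;
        freq[k][p[j]]++;
    }
    for (int b = k + 1; b < (int)t.size(); b++) t[b] ^= c;   // later blocks: lazy xor
}

// Query type 2: how many prefix xors with index <= i are equal to x?
long long query(int i, int x) {
    long long ans = 0;
    int k = i / B;
    for (int j = k * B; j <= i; j++)                         // partial block: check directly
        if ((p[j] ^ t[k]) == x) ans++;
    for (int b = 0; b < k; b++) {                            // full blocks before it
        auto it = freq[b].find(x ^ t[b]);
        if (it != freq[b].end()) ans += it->second;
    }
    return ans;
}
```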

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.
Tester's solution can be found here.

Problem in finding size of largest subset with given xor?

Given an array A and a range [0, L], find the sum b0 + b1 + b2 + ... + bL, where bi is the size of the largest subset of the array whose xor is i, for 0 <= i <= L.

If there is no subset whose xor is i, then bi = 0. 1<=1000

What's wrong with my code for problem TASHIFT?

Problem in solving Kiljee and XOR problem from hackerearth.


sigtstp error

What does the error code SIGTSTP mean? Here is the link to my code: link to my code. It works with the sample case, but I am not able to find what is wrong in my code.

TAXITURN - Editorial

PROBLEM LINK:

Practice

Contest

Authors: Praveen Dhinwa

Testers: Jingbo Shang, Istvan Nagy

Editorialist: Vaibhav Tulsyan

PROBLEM

Given $N$ points $(x_i, y_i)$ in the 2D-plane, check if any $3$ consecutive points $A(x_{i - 1}, y_{i - 1}), B(x_i, y_i), C(x_{i + 1}, y_{i + 1})$ have $\angle ABC \le 135^{\circ}$.

Constraints:

$1 \le T \le 50$

$3 \le N \le 50$

$0 \le x_i, y_i \le 50$

All $(x_i, y_i)$ pairs are distinct.

EXPLANATION

Though the problem idea was not that difficult, many people were perhaps scared off by the geometry and didn't attempt it much during the contest.

The idea was very simple. First, we find whether the taxi made a sharp turn or not. If it makes a sharp turn at the $i^{th}$ step, then we can try $4$ indices $j$ ($2$ before and $2$ after $i$) and try assigning them coordinates $(x, y)$ in the range $[0, 50]$ by pure brute force. This way, the total time complexity will be $\mathcal{O}(5 * 50^2 * 50)$ per test case.
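
For illustration, here is one way the sharp-turn test can be done with integer arithmetic only (my own sketch, not the setter's code): the angle at $B$ is at most $135^{\circ}$ iff $\cos(\angle ABC) \ge -\frac{\sqrt 2}{2}$, which can be checked without floating point as below.

```
// Returns true if the angle ABC (at vertex B) is at most 135 degrees.
bool angleAtMost135(long long ax, long long ay, long long bx, long long by,
                    long long cx, long long cy) {
    long long ux = ax - bx, uy = ay - by;      // vector B->A
    long long vx = cx - bx, vy = cy - by;      // vector B->C
    long long dot = ux * vx + uy * vy;         // |BA| * |BC| * cos(angle)
    if (dot >= 0) return true;                 // angle <= 90 degrees
    // For dot < 0: cos >= -sqrt(2)/2  <=>  2 * dot^2 <= |BA|^2 * |BC|^2
    long long lenU = ux * ux + uy * uy, lenV = vx * vx + vy * vy;
    return 2 * dot * dot <= lenU * lenV;
}
```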

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.

Tester 1's solution can be found here.

Tester 2's solution can be found here.

DISCAREA - Editorial

PROBLEM LINK:

Practice

Contest

Authors: Triveni Mahatha

Tester: Jingbo Shang

Editorialist: Vaibhav Tulsyan

PROBLEM

You are given $N$ semi-circles such that centres $c(x_i, y_i)$ of the semi-circle lie on the $X-axis$ and the semi-circle lies above the $X-axis$. Radius of the $i^{th}$ semi-circle is $r_i$. Answer $Q$ queries:

In each query, you are given a circle with centre $C(X, Y)$ and radius $R$ (with $Y \ge R$). Find the area within the circle that overlaps with some part of any of the $N$ semi-circles.

Constraints:

$1 \le T \le 10$

$1 \le N \le 10^5$

$20 \le Q \le 200$

$1 \le r_i, R \le 10^5$

$1 \le x_i, y_i, X, Y \le 10^5$

EXPLANATION

Note that the desired accuracy for this problem isn't as high as in typical geometry problems. Additionally, $Q$ is quite small as well. This suggests that we can use a Monte-Carlo or sampling-based solution.

Solution 1: Monte-Carlo Simulation

The high-level idea is as follows:

  • Maintain a set of segments describing the max $y$ for each $x$. Note that each geometric figure is a semi-circle, whose equation can be treated as a function.
  • For any $x$, we need to find the maximum value attained among all these $N$ functions.
  • This divides the $X$-axis into $O(N)$ segments, such that in each segment one of the given $N$ functions attains the max value.
  • For each query circle, iterate over the arcs and add the area of intersection of the query circle and each arc to the total area.

How do we perform segmentation of X-axis?

This can be done in multiple ways.

  • One way is to add the discs one by one. Suppose at some point we know the partition (of the $X$-axis) induced by the first $(i-1)$ discs; now we want to add disc $i$. We can binary search for its center to find its probable position. Then we can remove zero or more discs in the vicinity of the new disc, if the new disc dominates the entire segment earlier dominated by such a disc. Note that this is analogous to the convex hull trick. This is $O(N \cdot log(N))$.
  • Another, simpler way (implementation-wise) is a divide and conquer method with the same time complexity. Divide the array at the middle, find the segments for the left and right halves, and then merge them. How to merge? We find the segments incrementally from the smallest $x$ co-ordinate to the largest. For simplicity in coding, let's say the smallest $x$ is $-\infty$. Denote the array of segments returned by the recursion on the left half by $S_1$ and on the right half by $S_2$. We keep a variable $last_X$, initialized to $-\infty$, denoting the $x$ co-ordinate up to which we have already merged. We find the leftmost segment from $S_1$ which is not yet covered, i.e. a segment $seg(l, r, disc)$ with $r > last_X$; call it $s_1$, and similarly find $s_2$ from $S_2$. Now we know that in the range from $last_X$ to $min(s_1.r, s_2.r)$ one of $s_1$ or $s_2$ dominates (no disc other than these two can), and we find out which one. After this step we have considered everything up to $min(s_1.r, s_2.r)$, so we update $last_X$ accordingly. We repeat this until all the segments from $S_1$ and $S_2$ are covered; implementation-wise, until $last_X$ reaches $\infty$.

Pseudo code:

```
vector<segment> solve(discs[1...n]):
    vector<segment> S1 = solve(discs[1...n/2])
    vector<segment> S2 = solve(discs[n/2+1...n])
    lastX = -inf
    i1 = i2 = 0          // S1[i1] and S2[i2] are the current segments to be considered

    vector<segment> ans  // to return

    while lastX < inf:
        while S1[i1].r <= lastX and i1 < len(S1):
            i1 += 1
        while S2[i2].r <= lastX and i2 < len(S2):
            i2 += 1
        update(ans, S1[i1], S2[i2], lastX, min(S1[i1].r, S2[i2].r))
        lastX = min(S1[i1].r, S2[i2].r)
    return ans
```

Generating points within query circle

We can throw $x$ random points inside the query circle and check for each whether it lies inside some disc or outside. Now it's just a matter of estimating the bias of a coin with some confidence and some error margin. Here the margin of error $(E)$ is $0.02$. If we throw $x = 1.25 \cdot 10^4$ points, the confidence is $1 - 10^{-5}$, which is quite high. So we need $x \cdot 200$ checks overall. If we sort all these checks and then iterate, the checks can be done in $O(1)$ amortized time each.

How do we uniformly choose points within the circle?

Reference: Wolfram "To generate random points over the unit disk, it is incorrect to use two uniformly distributed variables $r$ in $[0, 1]$ and $\theta$ in $[0, 2 * \pi)$ and then take $x = r cos(\theta)$, $y = r sin(\theta)$, because the area element is given by $dA = 2 * \pi * rdr$, this gives a concentration of points in the center."


Let $a$, $b$ be two uniform distributions in $[0, 1]$. A random point $(x, y)$ can be generated as follows:

$x = R \sqrt{a} \cos(2\pi b)$, $y = R \sqrt{a} \sin(2\pi b)$
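
A minimal sketch of this sampling step (my own code; function and variable names are illustrative):

```
#include <cmath>
#include <random>
#include <utility>

// Sample one point uniformly at random inside the circle of centre (X, Y) and radius R,
// using the transform quoted above.
std::pair<double, double> randomPointInCircle(double X, double Y, double R, std::mt19937 &rng) {
    const double PI = std::acos(-1.0);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    double a = unif(rng), b = unif(rng);
    return { X + R * std::sqrt(a) * std::cos(2 * PI * b),
             Y + R * std::sqrt(a) * std::sin(2 * PI * b) };
}
```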

Computing the common area between the circle and semi-circles

To check whether a point $(x, y)$ is inside some disc, go to the disc which attains the max value at $x$ and check whether $y$ is at most that max value. This takes $O(1)$ time apart from finding the right disc, which takes $O(log(N))$.

Solution 2: Deterministic Solution using Integration

  • Using integration, every query can be answered in $O(N)$.
  • The solution involves computation of the integral of $\sqrt{R^2 - x^2}$ across all segments.
  • Eventually, we find area between each arc and the query circle.

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.
Tester's solution can be found here.

SPANTREE - Editorial

PROBLEM LINK:

Practice

Contest

Authors: Sidhant Bansal

Testers: Jingbo Shang, Istvan Nagy

Editorialist: Vaibhav Tulsyan

PROBLEM

Given that there exists an undirected weighted connected graph with $N$ vertices, find the total weight of its Minimum Spanning Tree.

You can perform 2 kinds of queries:

  1. Provide 2 non-empty, non-intersecting sets of vertices $A$ and $B$ as input to the judge and get the minimum-weight edge between any pair of nodes $u \in A$ and $v \in B$. If there is no such edge, the judge returns -1.
  2. Send the weight of the MST to the judge.

The cost of performing a query is $|A|$. The max cost that can be incurred is $10^4$.

Constraints:

$2 \le N \le 1000$

$1 \le weight_{edge} \le 10^5$

EXPLANATION

Will Kruskal's/Prim's Algorithm work?

A primitive Kruskal's / Prim's will not work. Kruskal's won't work because we would perform $N^2$ queries - one for each pair of nodes - where the cost of each query is $1$.

For Prim's, the set of nodes for which you know the answer keeps increasing. Hence, the costs of the queries would be something like $1, 2, 3, ... , N/2, N/2, (N/2 - 1), ..., 3, 2, 1$, and the total cost would be of the order $O(N^2)$.

Approach

  1. Find the nearest node of each node $u$ by using a query of the format: $[u]$ vs. all except $u$.

    • This gives us connected components of size $2$.
  2. In increasing order of component size, we can continue to perform queries like: [component] vs. [all except component].

You can observe that after every such query, the queried component's size at least doubles, since we always query the smallest component and it merges with a component that is at least as large.


Since the size of a component at least doubles whenever it is used as the query set, every vertex occurs as a vertex in $A$ at most $log(N)$ times. Components can be maintained using a DSU.

Each vertex contributes $1$ to the net cost. Hence, the overall cost is bounded by $O(N * log(N))$.

Since the value of $N$ is at most $1000$, our bound ensures that the max cost of $10^4$ is never exceeded.

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.

Tester 1's solution can be found here.

Tester 2's solution can be found here.

GENPERM - Editorial

PROBLEM LINK:

Practice

Contest

Author: Archit Karandikar

Tester: Jingbo Shang

Editorialist: Vaibhav Tulsyan

PROBLEM

For a permutation $P = (p_1, p_2, ..., p_N)$ of numbers $[1, 2, ..., N]$, we define the function $f(P) = max(p_1, p_2) + max(p_2, p_3) + ... + max(p_{N-1}, p_N)$.

You are given $N$ and an integer $K$. Find and report a permutation $P$ of $[1, 2, ..., N]$ such that $f(P) = K$ if such a permutation exists; otherwise print $-1$.

Note $f([ 1 ]) = 0$.

Constraints:

$1 \le T \le 40$

$1 \le N \le 10^{5}$

Sum of $N$ over all test cases in each file $\le 10^{6}$

$1 \le K \le 2 \cdot 10^{10}$

EXPLANATION

What are the minimum and maximum values of $f(P)$ that are possible?

The permutation that gives the minimum value of $f(P)$ is $P_{min} = [1, 2, 3, ... , N]$ and the permutation that gives the maximum value of $f(P)$ is $P_{max} = [1, N, 2, (N - 1), ...]$.

$K$ must lie between $f(P_{min})$ and $f(P_{max})$ for a valid permutation to exist.
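
For instance, for $N = 4$: $f(P_{min}) = f([1, 2, 3, 4]) = 2 + 3 + 4 = 9$ and $f(P_{max}) = f([1, 4, 2, 3]) = 4 + 4 + 3 = 11$, so a valid permutation exists exactly when $9 \le K \le 11$ (for example, $f([1, 2, 4, 3]) = 2 + 4 + 4 = 10$).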

Approach

Let's start from $P_{min}$ and move towards $P_{max}$ by modifying the permutation incrementally.

Consider $f(P) = f(P_{min})$. If we swap $N$ and $(N - 1)$, we get:

$P' = [1, 2, 3, ... , (N - 2), N, (N - 1)]$

Observe that $f(P') = f(P) + 1$.

Similarly, if we then swap $N$ and $(N - 2)$, we get the permutation $P" = [1, 2, 3, ... , (N - 3), N, (N - 2), (N - 1)]$.

$f(P") = f(P') + 1$.

Hence, we can see that the value of $f$ increases by $1$ for every move that we do. The position of each number can be determined uniquely!

O(K) brute-force solution

We can keep performing swaps until we reach a permutation $P$ with its $f(P)$ equal to $K$.

This approach would exceed the time-limit and hence we need to find a faster solution.

O(N) solution

The change in value ($\Delta_1$) of $f(P)$ for bringing $N$ from index $N$ to $2$ is $(N - 2)$. Similarly, change in value ($\Delta_2$) bringing $(N - 1)$ to index $4$ after that is $(N - 4)$.

$\Delta_i = (N - 2 \cdot i)$

We keep constructing the permutation $[1, N, 2, (N - 1), ... , j, (N - j + 1), ...]$ while $\sum\limits_{i=1}^j \Delta_i \le K$.

Now, we need to append to the above an arrangement of the numbers $[(j + 1), (j + 2), ... , (N - j)]$. We will need to keep moving $(N - j)$ to an earlier index if $K > \sum\limits_{i=1}^j \Delta_i$, because we still need to cover the remaining value, which is $K - \sum\limits_{i=1}^j \Delta_i$.

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.
Tester's solution can be found here.
