Channel: CodeChef Discuss - latest questions

Buying Sweets: Solution?


Buying Sweets

Sachin likes sweets a lot, so he goes to a market of sweets. There is a row of sweet stalls, and every stall has different sweets. To save time, he decided to buy sweets from contiguous stalls: he can buy from as many stalls as he wants, but all of those stalls need to be contiguous, and he buys exactly 1 kg of sweets from each of them. The cost of 1 kg of sweets at each stall is given. There is a strange billing rule in that market: the total cost of all sweets bought is the sum of the costs of all the sweets, each multiplied by the cost of the sweet he bought at the end. For example, if he buys sweets having costs 2, 3, 4 in that order, the total cost will be 2*4 + 3*4 + 4*4 = 36. Now he wonders what the total cost over all possible ways of buying sweets will be. Can you help him? Because this number could be large, take the final result modulo 10^9+7.

INPUT SPECIFICATION

Your function takes a single argument: a one-dimensional integer array of size N, in which the ith element denotes the cost of 1 kg of sweets at the ith stall. The first line of input contains an integer N denoting the size of the array (1 <= N <= 10^5). The next N lines of input each contain a single integer from 1 to 9.

OUTPUT SPECIFICATION

You must return an integer: the sum of the costs of all possible ways of buying sweets, modulo 10^9+7.

EXAMPLES

Sample Test Case 1-Input

3 1 2 3

Output

53

Explanation

Possible ways of buying sweets are: a) 1, b) 1 2, c) 2, d) 1 2 3, e) 2 3, f) 3. The cost of each of these is: a) 1*1 = 1, b) 1*2 + 2*2 = 6, c) 2*2 = 4, d) 1*3 + 2*3 + 3*3 = 18, e) 2*3 + 3*3 = 15, f) 3*3 = 9.

Hence total cost will be 1+6+4+18+15+9=53
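In case it helps, here is a minimal sketch (Java; my own attempt, not a verified accepted solution) of one O(N) way to compute this: for each right end r, keep a running total of all window sums ending at r; every window ending at r is billed by multiplying its sum with the cost at r.

import java.util.Scanner;

public class BuyingSweets {
    static final long MOD = 1_000_000_007L;

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        long[] a = new long[n];
        for (int i = 0; i < n; i++) a[i] = in.nextLong();
        long running = 0;   // sum of (a[l] + ... + a[r]) over all l <= r, for the current r
        long answer = 0;
        for (int r = 0; r < n; r++) {
            running = (running + (r + 1) * a[r]) % MOD;   // extend every earlier window and open a new one at r
            answer = (answer + running * a[r]) % MOD;     // every window ending at r is billed times a[r]
        }
        System.out.println(answer);   // prints 53 for the sample 1 2 3
    }
}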


Feedback on Codechef certification exam.


How was the Codechef certification exam?

BSONG Editorial


PROBLEM LINK:

Practice

Contest

Author:Chandan Boruah

Tester:Chandan Boruah

Editorialist:Chandan Boruah

DIFFICULTY:

EASY

PREREQUISITES:

Brute Force

PROBLEM:

Given notes of two types, what is the minimum number of notes to change so that all the notes are of the same type?

QUICK EXPLANATION:

Print the count of whichever type of note occurs less often (or equally often).

EXPLANATION:

All the notes of one type need to be changed to the other type, so the minimum number of notes to change equals the count of the type that occurs less often.

AUTHOR'S SOLUTION

using System;
class some
{
    public static void Main()
    {
        int n = int.Parse(Console.ReadLine());          // number of test cases
        for (int i = 0; i < n; i++)
        {
            int t = int.Parse(Console.ReadLine());      // number of notes (read, but not needed for counting)
            string[] ss = Console.ReadLine().Split();
            int a = 0; int b = 0;
            foreach (string tt in ss)
            {
                if (tt.EndsWith("#"))                   // count the notes of each type
                    a++;
                else b++;
            }
            Console.WriteLine(Math.Min(a, b));          // change whichever type is rarer
        }
    }
}

STOD editorial


PROBLEM LINK:

Practice
Contest

Author:Chandan Boruah
Tester:Chandan Boruah
Editorialist:Chandan Boruah

DIFFICULTY:

EASY

PREREQUISITES:

Brute Force

PROBLEM:

Given notes of two types, what is the minimum number of notes to change so that all the notes are of the same type?

QUICK EXPLANATION:

Print the count of whichever type of note occurs less often (or equally often).

EXPLANATION:

All the notes of one type need to be changed to the other type, so the minimum number of notes to change equals the count of the type that occurs less often.

AUTHOR'S SOLUTION

using System;
class some
{
    public static void Main()
    {
        int n = int.Parse(Console.ReadLine());          // number of test cases
        for (int i = 0; i < n; i++)
        {
            int t = int.Parse(Console.ReadLine());      // number of notes (read, but not needed for counting)
            string[] ss = Console.ReadLine().Split();
            int a = 0; int b = 0;
            foreach (string tt in ss)
            {
                if (tt.EndsWith("#"))                   // count the notes of each type
                    a++;
                else b++;
            }
            Console.WriteLine(Math.Min(a, b));          // change whichever type is rarer
        }
    }
}

help in http://codeforces.com/contest/161/problem/D


My code: http://codeforces.com/contest/161/submission/31368822

My approach: I take 1 as the root, calculate the number of nodes at every distance k (1 <= k <= 500) within each node's subtree, and then, for each node, calculate the number of nodes at every distance k outside its subtree. Finally I sum up the values within and outside the subtree for the given k, and since every pair is counted twice, I halve the answer.

It is failing on test 21. Please help me figure out where I am going wrong!

CodeChef Certified Data Structure & Algorithms Programme (CCDSAP) exam experience


How was the CodeChef Certified Data Structure & Algorithms Programme (CCDSAP) exam experience?

codechef ATMOQ


Why is it showing Wrong Answer?

For each i -> p[i],

I calculated the number of elements in each connected component (cycle)...

The answer will be their LCM...

solution

SIGTSTP Error


SIGTSTP error! Please help.
I have made a program which should take a string (max length 20) as input. It should output the string after removing the duplicate characters in it.
But I'm getting a SIGTSTP run-time error.

#include <iostream>

using namespace std;

int main() {

    int i, chS = 0, chI;
    char check[20];    // characters seen so far
    char strig[20];    // the string built from the input

    i = 0;
nxtChar:
    while (1)
    {
        cin >> strig[i];
        if (strig[i] == 13)        // stop when a carriage return is read
        {
            cout << endl << strig;
            exit(0);
        }

        for (chI = 0; chI < chS; chI++)
        {
            if (strig[i] == check[chI])
                goto nxtChar;      // duplicate character: skip it without advancing i
        }
        check[chS++] = strig[i];
        i++;
    }

    return 0;

}


Fee using NEFT for ACM ICPC


How do I pay the fee using NEFT for ACM ICPC? Please help me.

2 Cook Off and 2 Lunchtime in a Month..!!


I think that there should be 2 Cook-Offs and 2 Lunchtimes in a month..!!

Its advantages:

1) It helps a lot in interviews.

2) It helps a lot for ACM ICPC.

3) Day by day we will become stronger in problem solving.

4) We will learn how to solve problems within a given amount of time and master it.

5) We will be able to think and write code faster with more practice.

6) It will become a habit to sit for 2 to 3 hours and solve problems, so we will not be tense when giving a coding-round interview.

7) Sometimes during an interview we cannot solve a problem due to tension, but back in our room the same problem is easily solved; to avoid this, more practice options should be available.

Hence, I want CodeChef to conduct 2 Cook-Offs and 2 Lunchtimes in a month..!!

What are karma points?


I am new to this karma system. Can you explain why it is called that? What do karma points have to do with contribution?

Unofficial Editorials October Long Challenge (Part 2)


Hello Guys

This is part 2 of three posts of unofficial editorials for the October Long Challenge.

For Solutions of problems PERFCONT, MEX and/or CHEFCOUN, click here.

For Solutions of problems CHEFCCYL and/or SHTARR, click here

This post has editorials for problems CHEFGP and MARRAYS

Problem CHEFGP

Problem Difficulty : Easy

Problem Explanation

Given a distribution of apples and bananas among people and two integers a and b, we need to distribute apples and bananas such that no (a+1) consecutive people get apples and no (b+1) consecutive people get bananas.

If this is not possible with the given apples and bananas, we can distribute additional kiwis (kiwis are costly, so use as few as possible).

Solution

First thing, the given distribution is of no use to us except the number of apples and bananas to be distributed.

So, first count number of 'a' and 'b' in given distribution, say A and B.

The approach I used here is:

Case 1: A>B

while(A>B)

Distribute a apples, followed by 1 banana (or kiwi in case all bananas are distributed)

(i.e. append 'a' a times to output string, followed by one 'b' (if B > 0) or '*'. decrement A by a, B by 1 if B>0)

Then distribute remaining Apples and bananas as ababab..... till A==0 and B == 0;

(i.e. append "ab" A times to output string.)

Case 2: A < B

while(A<B)

Distribute b bananas, followed by 1 apple (or kiwi in case all apples are distributed)

(i.e. append 'b' b times to output string, followed by one 'a' (if A > 0) or '*'. decrement B by b, A by 1 if A>0)

Then distribute remaining Apples and bananas as ababab..... till A==0 and B == 0;

(i.e. append "ab" A times to output string.)

Case 3: A==B

Simply distribute as abababab...... till A==0 && B == 0

(append "ab" A times to output string)

print output string.

Case 3 approach works for a=1 and b = 1.

So this will work for every value a >= 1 and b >= 1

Case 1 works because we always distribute as many as possible of the fruit present in greater amount (apples), while taking care not to place more than a of them in a row by separating the blocks with one banana or one kiwi (if the bananas are finished), thus using the minimum number of kiwis.

Same goes for Case 2.

Here's a link to my Code

Problem MARRAYS

Problem difficulty:Medium

Problem Explanation

Given an array of arrays, maximize the following value:

Summation (for i = 0 to N-2) of abs(A[i][last] - A[i+1][first]) * (i+1)

Allowed operation: cyclic rotation of each of the inner arrays.

Solution

In this problem, My brute force solution got 100 points.

Amazed!! ryt...

Well, not entirely a brute force :D, but similar to one.

I created an array of maps (LinkedHashMap in Java) to store answers in the following way:

map[i] mapping a key x to a value L means that L is the maximum answer obtainable from arrays i to N-1 when the ith array is rotated so that x is its first element.

The map serves to weed out the trouble of handling duplicate values in an array, and also helps efficiency.

//The beginning

Inserted into map[N-1] two mappings

minimum value in (N-1)th array mapped to 0

maximum value in (N-1)th array mapped to 0

//because only the maximum or minimum value can give a larger answer when using in absolute operations.

int[] removal = new int[1e6+1] //removal array used to remove useless entries from map

loop from i = N-2 down to i = 0 { // outer loop

// let len(i) denote the length of the ith array

int min = 1e6+1, max = 0; // these store the minimum and maximum values of the nextValue variable

for int j = 0 to len(i)-1 { // inner loop

int currentValue = A[i].get(j);

int nextValue = (j == len(i)-1) ? A[i].get(0) : A[i].get(j+1);

min = Min(min, nextValue);

max = Max(max, nextValue);

for every entry e in map[i+1]{ //loop l1

put in map[i] a mapping from key nextValue to value Max( map[i].getOrDefault(nextValue, 0), abs(e.key - currentValue)*(i+1) + e.value )

}//end of loop l

}//end of inner loop

// Now the most important part: removing useless entries from map[i], so as to improve time complexity to pass the time limit

long minAns = map[i].get(min), maxAns = map[i].get(max);

//getting best answers mapped to minimum key and maximum key in map[i]

int r = 0 // setting removable entries to 0

for Entry e in map[i]{

if(e.key != min && e.key != max && e.value <= minAns && e.value <= maxAns)removal[r++] = e.key;

//if any value in mapping has answer smaller than answers mapped to both minimum key and maximum key in map[i], there's no way that this entry can get greater answer. So discarding these values.

}

//Now removing entries

for(int k = 0; k < r; k++) map[i].remove(removal[k]);

} // end of outer loop

The final answer is the maximum value stored in map[0];

There is a recursive solution too, which I leave as a quest for my readers today.

The above algorithm works because it considers every possible rotation, discarding useless entries immediately so that the time complexity doesn't blow up.

Still, I feel the official editorial for this problem deserves a look...

Link to my code
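For comparison, here is a minimal Java sketch of the more standard forward DP (not the map-based method above). It assumes the objective is the sum of abs(last of array i - first of array i+1) multiplied by i for 1-based i, which matches the (i+1) factor in the pseudocode, and it assumes the input is T test cases, each giving N and then each inner array as its length followed by its elements; class and variable names are my own. It uses |x - y| = max(x - y, y - x), so only two aggregates over the previous array's rotations are needed.

import java.util.Scanner;

public class Marrays {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        while (t-- > 0) {
            int n = in.nextInt();
            long[][] a = new long[n][];
            for (int i = 0; i < n; i++) {
                int len = in.nextInt();
                a[i] = new long[len];
                for (int j = 0; j < len; j++) a[i][j] = in.nextLong();
            }
            final long NEG = Long.MIN_VALUE / 4;
            // plus = max over rotations of (dp + last * w), minus = max of (dp - last * w),
            // where w is the multiplier of the edge leading to the next array.
            long plus = NEG, minus = NEG;
            for (int j = 0; j < a[0].length; j++) {    // dp of array 0 is 0 for every rotation; next edge weight is 1
                plus = Math.max(plus, a[0][j]);
                minus = Math.max(minus, -a[0][j]);
            }
            long answer = 0;
            for (int i = 1; i < n; i++) {
                long w = i;                             // weight of the edge between arrays i-1 and i (0-indexed)
                int len = a[i].length;
                long best = NEG, nPlus = NEG, nMinus = NEG;
                for (int j = 0; j < len; j++) {
                    long first = a[i][j];
                    long last = a[i][(j - 1 + len) % len];   // a rotation is fixed by choosing its first element
                    // |last(i-1) - first| * w = max(last(i-1)*w - first*w, first*w - last(i-1)*w)
                    long dp = Math.max(plus - first * w, minus + first * w);
                    best = Math.max(best, dp);
                    nPlus = Math.max(nPlus, dp + last * (w + 1));    // aggregates for the next edge
                    nMinus = Math.max(nMinus, dp - last * (w + 1));
                }
                plus = nPlus;
                minus = nMinus;
                answer = best;
            }
            System.out.println(answer);
        }
    }
}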

Next problems are discussed in Unofficial Editorial Part 3.

Keep Coding. Feel free to ask anything.

Poor Service at Codechef


It is quite annoying that a giant like CodeChef has such issues. I submitted a solution and it was not judged for 10 minutes. Please look into this @admin @admin2 @admin3

Unofficial Editorials October Long Challenge (Part 3)


Hello Guys

This is part 3 of three posts of unofficial editorials for the October Long Challenge.

For Solutions of problems PERFCONT, MEX and/or CHEFCOUN, click here.

For Solutions of problems CHEFGP and/or MARRAYS, click here

This post has editorials for problems CHEFCCYL and SHTARR

Problem CHEFCCYL

Problem Difficulty:Medium

Prerequisites: Sum Arrays would be enough :)

Problem Explanation

Find the smallest distance from vertex v1 of cycle c1 to vertex v2 of cycle c2, given a set of cycles, with the ith cycle connected to the (i-1)th and (i+1)th cycles by an edge.

Edges are weighted and bidirectional.

Solution

Subtask 1:

Since N, Q <= 10^3, the simplest solution passes: run Dijkstra's algorithm for every query. Each query takes roughly O(N) time, resulting in overall complexity O(NQ).

Clearly, this solution will fail grossly for remaining subtasks.

First, let us define connecting edges: the edges which connect the cycles to each other.

Let us solve a simpler version of this problem first.

Given a cycle with N nodes, the ith node being connected to its two adjacent nodes, find the smallest distance between two given nodes.

N = 5.

dist 5 - 1 = 7

dist 1 - 2 = 3

dist 2 - 3 = 4

dist 3 - 4 = 5

dist 4 - 5 = 6

//I am used to writing sum array code this way only. Actual order of weight will be given as 3,4,5,6,7. Adjustment has been made to store array as last element, first element , 2nd, 3rd and so on...

For Every node, there are exactly two paths to visit other node. eg. node 3 can be visited from node 1 through path 1-2-3 (weight 3+4 = 7) or 1-5-4-3 (7+6+5 = 18).

Now, consider array {7, 3, 4, 5, 6, 7, 3, 4, 5, 6} // distance edge weights in serial order

generate sum array of above, say S = {0, 7, 10, 14, 19, 25, 32, 35,39,44,50 };

Now, the distance from 1 to 3

(1st path) = S{3} - S{1} = 14-7 = 7

(2nd path) = S{1 + N} - S{3} = S{1+5}-S{3} = 32-14 = 18 //N is number of nodes in given cycle.

//using curly brackets due to formatting issues

Required distance = Min(7,18) = 7

Seems magic.. ryt?? :)..

So now we can calculate the smallest distance from one node to another in O(1), after building the sum array once in O(N).
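A tiny self-contained Java sketch of this trick (with my own plain 1..N indexing rather than the doubled array above; the class name is made up):

public class CycleDistance {
    public static void main(String[] args) {
        // edge[i] = weight of the edge between node i and node i+1 (1-based); edge[N] closes the cycle back to node 1
        long[] edge = {0, 3, 4, 5, 6, 7};          // N = 5, the same weights as the example above
        int N = 5;
        long[] S = new long[N + 1];                // S[i] = edge[1] + ... + edge[i]
        for (int i = 1; i <= N; i++) S[i] = S[i - 1] + edge[i];
        int u = 1, v = 3;                          // assume u < v
        long oneWay = S[v - 1] - S[u - 1];         // 1 -> 2 -> 3 = 3 + 4 = 7
        long otherWay = S[N] - oneWay;             // 1 -> 5 -> 4 -> 3 = 7 + 6 + 5 = 18
        System.out.println(Math.min(oneWay, otherWay));   // 7
    }
}

Building S is done once in O(N); every distance query after that is O(1).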

Be Sure you thoroughly understand the above solution before proceeding.

Now, the Second sub-problem is...

Given the edge weights between cycles, find the distance between two cycles...

Consider example N = 4 (Number of cycles here)

dist 4 - 1 = 7 // connecting edges between cycles, given after cycles, but before queries in input of problem

dist 1 - 2 = 3

dist 2 - 3 = 4

dist 3 - 4 = 5

Here too, the same technique. :)

Say we are asked to find the distance from vertex v1 of cycle c1 to vertex v3 of cycle c3, the total number of cycles being 4. We need to consider exactly two paths: one from cycle c1 to cycle c3 through c2, the other one through c4.

Consider array {7,3,4,5,7,3,4,5}

Sum array {0,7,10,14,19,26,29,33,38}

distance C1 to C3

through C2 = S{C3}-S{C1} = 14-7 = 7

through C4 = S{C1+N} - S{C3} = 26-14 = 12;

Main Problem

If we consider a path from any vertex in cycle c1 to any vertex of cycle c3, we need to add the weight of the connecting edges C1-C2 and C2-C3, which can be added as above.

But we also need to consider the distance travelled within cycle C2, from the end point of the connecting edge from C1 to C2 to the start point of the connecting edge between C2 and C3.

This can be calculated using the same technique, just constructing the sum array from base array. ( :D )

The values in the base array are:

{0} = dist from (end point of the connecting edge between the Nth cycle and the 1st cycle) to (source point of the connecting edge between the 1st cycle and the 2nd cycle)

{1} = dist from (end point of the connecting edge between the 1st cycle and the 2nd cycle) to (source point of the connecting edge between the 2nd cycle and the 3rd cycle), and so on.

Let us denote this as Cycle weight.

Thus, the required distance from vertex v1 of cycle c1 to vertex v2 of cycle c2 is

Minimum of following two

(path 1) dist(v1 to the end point of the connecting edge between cycles c1 and c1+1) + E{c2} - E{c1} + V{c2-1} - V{c1} + dist(v2 to the end point of the connecting edge between cycles c2-1 and c2)

(path 2) dist(v1 to the end point of the connecting edge between cycles c1 and c1-1) + E{c1+N} - E{c2} + V{c1+N-1} - V{c2} + dist(v2 to the end point of the connecting edge between cycles c2 and c2+1)

(here E denotes the sum array of the connecting-edge weights and V the sum array of the cycle weights defined above)

I know the solution is a bit complex, but I guarantee you, it's worth it. :)

Here's a link to my code

Problem SHTARR

Problem Difficulty:Medium

Prerequisites: Square-root Decomposition, binary search

Problem Explanation

Given an array A, perform two types of queries: 1. (i, X): increase A[i] by X. 2. (i, L, R): starting from position i, count the indices j such that A[j] >= L, A[j] is greater than the maximum value of A[i..j-1], and the maximum value of A[i..j-1] is less than R.

Be sure to understand the second query, whether you need to read it 1 time, 2 times, 10^2 times or 2^10 times :D

Solution

A Naive Solution

Create int[] next, the Next Greater Element array (from here).

For an update:

update the value in the array,

then recreate the whole next array.

For a query (i, L, R), start from the ith value:

long x = i; int count = 0;

while(a[x] < L) x = next[x]; // to skip values smaller than L

while(a[x] < R) {

count++;

x = next[x];

}

count++; // because the last value reached is >= R

Note: the above is just an overview of the naive solution; forgive me if there are errors. :)

The above solution performs both update and query in O(N) time in worst case, giving overall complexity O(NQ) which will exceed Time limit.

Efficient Solution

Have a read of one of my answers to a similar problem here.

Now, the idea of sqrt decomposition is to divide the given array into sqrt(N) blocks and solve queries as well as updates in O(sqrt(N)) time.

Please refer to my code to follow the explanation.

Now, every block is an unsorted array.

Sub-problem: within a block, we need to count the values that are greater than X and also greater than every earlier value in the block (the block's prefix maxima).

For example: Say the ith block contains values {2,5,8,4,6,9}

So the list[i] will only store {0, 2,5,8,9} and size[i] = 5 // 0 is added to terminate binary search, as explained below

If Block contains values {2,5,4,6,8,9}, the list[i] will contain {0,2,5,6,8,9} and size[i] = 6.

The idea here is that if we now need to find the number of such values greater than X, we just need a binary search in this list.

For the block {2,5,8,4,7,9}, if we need the values greater than 6,

binary search within the list {0,2,5,8,9} returns index 3 (the position of 8), so the count is size - 3 = 2.

The list array stores the elements in the above form. The max array stores the maximum value of each block, and the size array stores the size of the list made from each block.
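A small Java sketch of just this per-block sub-problem (building the list of prefix maxima and the binary search); the names are mine, not taken from the linked code:

import java.util.ArrayList;
import java.util.List;

public class BlockSketch {
    // keep only the elements strictly greater than everything before them in the block
    static List<Integer> buildList(int[] block) {
        List<Integer> list = new ArrayList<>();
        list.add(0);                               // sentinel, as described above
        int max = 0;
        for (int v : block) {
            if (v > max) { list.add(v); max = v; }
        }
        return list;
    }

    // index of the first list element strictly greater than x (the list is strictly increasing)
    static int firstGreater(List<Integer> list, int x) {
        int lo = 0, hi = list.size();
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (list.get(mid) > x) hi = mid; else lo = mid + 1;
        }
        return lo;
    }

    public static void main(String[] args) {
        List<Integer> list = buildList(new int[]{2, 5, 8, 4, 7, 9});   // -> [0, 2, 5, 8, 9]
        int idx = firstGreater(list, 6);                                // -> 3 (the position of 8)
        System.out.println("count = " + (list.size() - idx));           // -> 2 (the values 8 and 9)
    }
}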

If you have understood all this, you should be able to solve the problem yourself

Now, we build blocks in this way, creating list, storing size and max value. (build function in my code)

Now, for Update operation i, X

int blockNo = i/len;

A[i] += X; //updated value in array;

updateBlock(blockNo); //updated the list

Now, for a query (i, L, R):

long prev = L-1;

int blockNo = i/len; int count = 0;

loop over positions from i to the end of the first block {

if(a[j] > prev){ count++; prev = a[j]; }

if(prev >= R) return count; // in case all the required values are within the current block

}

for(int block = blockNo+1; block < numberOfBlocks; block++){

if(max[block] > prev && max[block] < R){ // middle blocks

count += size[block] - binarysearch(block, prev); // index of the first list value greater than prev; add the number of larger values

prev = max[block];

} else if(max[block] <= prev) continue; // block contains no value greater than prev

else {

// this is the last block to scan, because it contains a value >= R

// search it manually, or with another binary search for the first value >= R, as in my code

loop over positions in this block {

if(a[j] > prev){ count++; prev = a[j]; }

if(prev >= R) return count; // count is the required answer of the query

}

break;

}

}

return count; // in case the maximum of a[i..n-1] is less than R

I also got 10 points in Lucky Edge problem using Brute-force solution, but it's better to have a look at official solution of remaining problems, as i myself will :)...

Keep coding. Feel Free to ask anything...

Unofficial editorials OCT17


A Balanced Contest

It is the easiest question in the contest. You just have to count the number of cakewalk problems and the number of hard problems.

So let hcnt be the count of hard problems and ccnt the count of cakewalk problems. Then, for each input denoting the number of participants who solved a problem:

if(input_number>=p/2)

ccnt++;

if(input_number<=p/10)

hcnt++;

and then

if(ccnt==1 && hcnt==2)

printf("Yes");

else

printf("No");

Link to my solution : https://www.codechef.com/viewsolution/15624434

Max Mex

In this problem you have to find the minimum element in the complement of the given set with respect to the universal set of whole numbers.

The additional task is to find the maximum such value that is possible after adding some k numbers to the set.

For example if the set is {1,2,3}.

The MEX is 0

Let k=1

We have to find the maximum value of MEX. So we can add 0 to the set such that the value of MEX becomes 4 which is the largest possible with this set and k .

So what we can do is fill the holes (i.e. the numbers which are not in the set); when no more holes can be filled, the most recent hole is our MEX, because it is the least whole number missing from the set.

The approach that I used is to map the numbers of the set that we are given using hash table .

(1) First of all , I found the maximum value in the set and side by side kept on mapping all the input values to 1 .

(2) Then I ran a loop from i=0 to i<=max.

(3) In the loop, we check whether the number is present in the set. If it is not present and k>0 (we still have numbers left to fill with), we do k--, signifying that we have added that element.

(4) Meanwhile if k becomes 0 in the loop , we break from the loop , and the last element which was not present is our answer. We signify this by making flag=1;

(5) If outside the loop , flag==0 , our answer will be

ans=max+k+1;

where the k value is the value which remains after the loop. This is because the remaining k values will fill the set continuously up to max + k. Then we add 1 to get the first hole that is left.

Link to my Solution : https://www.codechef.com/viewsolution/15625843

Counter Test For CHEFSUM

I request you to first solve the CHEFSUM problem. The answer there is the first index of the minimum value in the array.

In this problem we have to somehow overflow the value of the integer variable. The unsigned int data type has a size of 4 bytes, or 32 bits. Its range is from 0 to 2^32 - 1, i.e. 0 to 4294967295. The solution can vary highly from person to person.

One observation that has to be made in this question is to look into constraints given in the subtasks. There , the range of n is

99991 <= n <=100000

So actually we have to devise the test cases for only 10 values of n . And also for full score,

1 ≤ ai ≤ 10^5

So what I did was divide the maximum value of unsigned int by n and print that n times, with minor adjustments to satisfy the constraints on ai.

Link : https://www.codechef.com/viewsolution/15649810

Chef and a great voluntary Program

In this problem, we have to arrange the groups of apples and bananas / individual apples and bananas in such a way that no more than x apples appear consecutively and no more than y bananas appear consecutively.

If groups of one type still remain even after mixing the groups, we can break the previous groups that already have their condition satisfied and use them as new separators for the other group.

If groups of one type still are not separated enough, we have to place kiwis in between.

(1) So first, we have to determine the number of groups of bananas and apples. For that, we first have to count the number of apples and bananas (i.e. a and b) in the string.

The groups of a are :

//cnta is the count of number of apples

if(cnta%x==0)

ga=cnta/x;

else

ga=cnta/x+1;

The same goes for b.

Then we go on placing the fruit having the smaller number of groups as the separator.

Until the number of groups of both fruits is the same, we keep placing a single character of the separator fruit between the groups of the fruit having the higher number of groups, and keep decreasing the count of each type of fruit. We recalculate the number of groups in each iteration after placing the characters of both types.

If characters still remain after that, we print them in alternating groups.

If some groups still remain even then, we print each group followed by a star.

Link to my solution : https://www.codechef.com/viewsolution/15679608

Chef and Cycled Cycles

At first sight, the problem looks like a typical graph problem, but with careful observation one can see that it is much simpler.

(1) For each cycle, determine the shortest path one has to travel to cross that cycle. For this, use two points: the one where the previous cycle connects to it and the one where this cycle connects to the next cycle. There can only be two paths, one clockwise and one counter-clockwise, and we take the shorter one.

You can save this in an array whose ith element is the sum of the paths up to that cycle plus its own path inside the cycle. This way you avoid calculating it again and again.

(2) You can also make an array which sums up the path of intermediate connections between the cycles.

(3) Now for each query , do the following :

(i) Calculate the shortest path between the cycles, not considering the cycles which are the end points.

(ii) In the first cycle c1, calculate the shortest path from v1 to the point which connects it to the next cycle. Similarly, calculate the path from v2 in c2 to the point which connects it to the previous cycle.

There are two cases of the above step: one path can be clockwise and the other anti-clockwise. You have to take the minimum of them.

Link : https://www.codechef.com/viewsolution/15847886


XORTREEH - Editorial


PROBLEM LINK

Practice
Contest

Author:Adarsh Kumar
Tester:Alexey Zayakin
Editorialist:Jakub Safin

DIFFICULTY

HARD

PREREQUISITES

Fourier transform, number theoretic transform (NTT), fast (modular) exponentiation and modular inverse

PROBLEM

You're given an array $A$ of $N$ non-negative integers and integers $K,X$.

Define the XOR of two numbers $a\oplus b = c$ in base $K$ this way: the $d$-th digit of $c$ in base $K$ is $c_d=a_d+b_d$ modulo $K$.

Compute the probabilities $p_i$ that the base $K$ XOR of mex values of $X$ randomly selected subsequences of $A$ is equal to $i$, for all possible $i \ge 0$, modulo 330301441.

QUICK EXPLANATION

The mex won't be too large, XORs won't be too large either. Compute the probabilities for getting all possible mex-s of one subsequence. The probabilities for XOR of $X$ subsequences can be computed using slow multidimensional number theoretic transform, where each dimension is a digit and array size is $K$ instead of a power of $2$, combined with fast exponentiation.

EXPLANATION

The problem statement mentions the answer should be a complicated sum $\sum (i^2 p_i^3)^i$ over all $p_i > 0$ (obviously, the terms with $p_i=0$ don't affect the sum since that only happens with $i > 0$). However, that's not important. The sum is just a hash value that's there to avoid having to print large numbers and we can compute it after finding all $p_i$. Let's just mention that the $i$-th powers can be computed using fast exponentiation.

Since we need to compute the result modulo $MOD$, we need to work with fractions (all $p_i$ will be rational numbers) as their equivalents modulo $MOD$ -- dividing by $Q$ corresponds to multiplying by its modular inverse. Since the given modulo is a prime, the inverse is $Q^{-1}=Q^{MOD-2}$ according to Fermat's little theorem, which can be computed using the above-mentioned fast exponentiation.
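For reference, a minimal Java sketch of fast modular exponentiation and the Fermat inverse (the class and method names are my own; the setter's and tester's solutions linked below may differ):

public class ModArith {
    static final long MOD = 330301441L;   // the prime modulus from the statement

    // fast exponentiation: base^exp mod m in O(log exp)
    static long power(long base, long exp, long m) {
        long result = 1;
        base %= m;
        while (exp > 0) {
            if ((exp & 1) == 1) result = result * base % m;
            base = base * base % m;
            exp >>= 1;
        }
        return result;
    }

    // modular inverse via Fermat's little theorem: q^(MOD-2) mod MOD, valid because MOD is prime
    static long inverse(long q) {
        return power(q, MOD - 2, MOD);
    }

    public static void main(String[] args) {
        long q = 123456;
        System.out.println(q * inverse(q) % MOD);   // prints 1
    }
}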

Which values of $i$ give non-zero $p_i$? Obviously, the mex of an array of size $N$ can't be more than $N$, since the opposite would require all integers between $0$ and $N$ to be present in $A$. We can interpret numbers $\le N$ in base $K$ as numbers with at most $D$ digits, or exactly $D$ digits including leading zeroes, where $D=\left\lceil \log_K N \right\rceil$. Xor-ing $D$-digit numbers gives a $D$-digit number again, so the xor of $X$ numbers is $< K^D$. Therefore, it's sufficient to compute $p_i$ only for $i < K^D$, which makes $O(KN)$ numbers. That's not too much.

From mex to probabilities

Let's find the probabilities $P_1(i)$ that the mex of a random subsequence of $A$ will be equal to $i$, e.g. the probabilities $p_i$ if $X=1$. As mentioned above, we can limit ourselves to $i \le N$.

We can compute just the number of subsequences $S_1(i)$ that give mex equal to $i$ and then normalise those values -- divide them by $\sum S_1(i)$, or rather multiply by its multiplicative inverse -- to get $P_1(i)$.

If the mex of some subsequence is $i$, then all elements $A_j=i$ can't be in the subsequence. Any of the elements $A_j > i$ can be in there, but it doesn't matter; if there are $g$ such elements, that gives $2^g$ possibilities. Finally, for any $0 \le k < i$, there must be at least one element $A_j=k$ present in the subsequence. If there are $s_k$ such elements for a given $k$, then there are $2^{s_k}-1$ ways to choose them (any non-empty subset). We can express

$$S_1(i) = 2^g \prod_{k=0}^{i-1} \left(2^{s_k}-1\right) = \prod_{k=i+1}^{A_{max}} 2^{s_k} \prod_{k=0}^{i-1} \left(2^{s_k}-1\right)\,,$$

where $A_{max}$ is the maximum element in $A$, since $g$ is just the sum of $s_k$ for $k > i$.

Using fast exponentiation, we can precompute all $s_k$, $2^{s_k}$, their suffix products and prefix products of $2^{s_k}-1$ (similarly to prefix sums) and compute $S_1(i)$ and $P_1(i)$ for all $i \le N$ using the given formula in $O(A_{max}+N\log N)$; it doesn't even need to depend on $A_{max}$ if we notice that since the mex can't be greater than $N$, setting $A_i := min(N+1,A_i)$ doesn't affect the result.

This approach is fast with only $O(K^D)=O(KN)$ time complexity.

Walsh-Hadamard transform

Look at the straightforward way to compute probabilities $P_2(i)$ for $X=K=2$ from $P_1(i)$:

$$P_2(i) = \sum_{j=0}^N P_1(j) P_1(i\oplus j)\,.$$

It's very similar to convolution of two arrays, the only difference is that we're using $\oplus$ instead of $+$. The convolution of 2 arrays can be computed using fast Fourier transform by computing the FFT of both arrays extended to size $2^k \ge 2N$, multiplying their corresponding elements and computing the inverse FFT of the resulting array; for this xor-convolution, it's very similar, but we're using something called fast Walsh-Hadamard transform instead. You can read about it here.

For general $X$, the fast way to compute all $P_X(i)$ is to compute the Walsh-Hadamard transform $WH\lbrack P_1\rbrack(\nu)$, take $B(\nu)=WH\lbrack P_1\rbrack^X(\nu)$ (using fast exponentiation) and compute $P_X(i)$ as the inverse Walsh-Hadamard transform $P_X=WH^{-1}\lbrack B\rbrack$. However, this only works for $K=2$, where the conventional xor is defined.

The following is actually a generalised version of WHT for arbitrary $K \ge 2$.

A better approach: number theoretic transform

This approach uses the specific value of $MOD$. If we compute small factors of $MOD-1$, we can see that all numbers from $2$ to $10$ -- all possible $K$ -- divide it! That means we can use the number theoretic transform, which allows us to treat the base-$K$ xor as what it actually is: summation modulo $K$.

On the other hand, we're going to need the multidimensional version. We can look at an index $i$ as a vector of $D$ digits $(i_1,\dots,i_D)$; the xor of 2 vectors is actually just their vector sum and then taking the remainder mod $K$ in each digit. The formula for $P_2(i)$ then becomes $\sum_j P_1(j_1,\dots,j_D) P_1((i_1-j_1)\%K,\dots,(i_D-j_D)\%K)$, which is just multidimensional convolution with indices modulo $K$.

So how do we do convolution with indices modulo $K$? We don't need to do anything -- turns out convolution using DFT or NTT is already done with indices modulo array size! That's why we need the trick with padding the array with zeroes at the end to at least twice the size (it's a power of 2 just so that it'd run fast): when we're doing $C_{i+j} += A_i B_j$, we're only adding a non-zero number if $i+j < 2N$, so taking it modulo $2N$ does nothing.

The reason why this happens is apparent if we look at how DFT or NTT works. For an array of size $N$, we choose a number $w$ such that $w^k \neq 1$ for $0 < k < N$ and $w^N=1$ and compute $F\lbrack A\rbrack(j) = \sum w^{jk} A_k$ for each $0 \le j < N$. The inverse transformation uses $1/w$ instead of $w$. DFT uses $w=e^{2\pi i / N}$; for NTT, it's a so-called primitive root -- a number for which the required conditions hold modulo $MOD$. There's no easy way to pick a primitive root, but since we have a fixed modulo and array sizes $N$ ($N=K$ here) are small, we can compute them e.g. by bruteforcing locally for all possible values of $K$ and hardwire them into the code.

Anyway, since $w^N=1$, there's no difference in what we're computing if we take some index modulo $N$. We can run DFT or NTT to get $F\lbrack P_1\rbrack$ without increasing its size to a power of $2$, then take $F\lbrack P_X\rbrack(j)=F\lbrack P_1\rbrack(j)^X$ for each $j$ and finally compute $P_X$ using an inverse transform and it gives us the probabilities we need.

There's a small drawback here: we're transforming arrays of size $K$, so there's no way to use the "butterfly scheme" of classic FFT. However, we don't need that. In order to stay in integers and get the required precision, we're going to use multidimensional NTT modulo $MOD$ by simply computing the required sums directly. Small array sizes help a lot, since this bruteforce approach runs in $O(K)$ per element per dimension.

We have $O(K^D)=O(KN)$ elements (the former bound is tighter and we need to work with fixed $D$ anyway, so let's use that) in $P_1$ and $P_X$, so each NTT (direct and reverse) runs in $O(K^{D+1}D)$. Between them, there are $O(K^D)$ fast exponentiations in $O(\log MOD)$. The total time complexity is therefore $O(K^D(KD+\log{MOD}))$; memory complexity $O(K^D)$.

AUTHOR'S AND TESTER'S SOLUTIONS

Setter's solution
Tester's solution

Unofficial Editorials October Long Challenge (Part 1)


Hello Guys

This is part 1 of three posts of unofficial editorials for the October Long Challenge.

For Solutions of problems CHEFGP and/or MARRAYS, click here.

For Solutions of problems CHEFCCYL and/or SHTARR, click here

This post has editorials for problems PERFCONT, MEX and CHEFCOUN

Note: Problems are explained for newbie/beginner level. In case you find it too basic, feel free to skip paragraphs of explanation and just read the bold text.. :)

Problem PERFCONT

Problem Difficulty : Cakewalk

Problem Explanation

Given the number of problems and, for each problem, the number of participants who solved it, check whether the contest's overall difficulty is balanced or not.

If p denotes the number of participants who solved a problem and P the total number of participants, the contest is balanced if

p >= P/2 (integer division) for exactly one problem AND p <= P/10 (integer division) for exactly two problems.

Solution

The problem explanation provides a direct solution. Use counter variables easy and hard, denoting the number of cakewalk and hard problems. Simply check, for every problem:

if first condition satisfied, easy++;

if second condition satisfied, hard++;

In the end, if easy == 1 && hard == 2 print "yes" else print "no"

Link to my Solution here
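For reference, a minimal Java sketch of this counting; it assumes the input is T test cases, each giving N and P followed by the N solve counts, so the exact format (and the output casing) should be checked against the problem statement:

import java.util.Scanner;

public class Perfcont {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        while (t-- > 0) {
            int n = in.nextInt();          // number of problems
            long p = in.nextLong();        // number of participants
            int easy = 0, hard = 0;
            for (int i = 0; i < n; i++) {
                long solved = in.nextLong();
                if (solved >= p / 2) easy++;     // first condition: cakewalk
                if (solved <= p / 10) hard++;    // second condition: hard
            }
            System.out.println(easy == 1 && hard == 2 ? "yes" : "no");
        }
    }
}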

Problem MEX

Problem Difficulty:Simple

Problem Explanation

Mex is the smallest non-negative integer which is not present in the given set.

Given a set, find the maximum value of the MEX of the set after inserting any K values into it.

Solution

This problem needs one observation

consider set {0, 1, 2, 2} and K = 2

Mex of given set = 3

Insert 3 into set, set become {0,1,2,2,3}, mex = 4

Insert 4 into set, set become {0, 1, 2, 2, 3, 4} mex = 5

From this, it follows that if at every step we insert the MEX of the existing set into the set, we will get the maximum possible value of the MEX after inserting K values.

Perhaps the simplest solution,

  1. Declare a boolean array (for C++ users, an int array) present of size up to 10^6 (2*10^5+1 will also suffice here, since the answer can never exceed N + K)

  2. For every value x present in array, set present[x] = true (present[x] = 1 for c++ users)

  3. Run a loop i = 0 to 10^6: if(!present[i] && K > 0) K--; else if(!present[i] && K == 0) { answer = i; break; }

  4. print answer;

A link to my code here
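For reference, a self-contained Java sketch of the same idea; the bound limit = N + K works because the answer can never exceed N + K, and N and K are assumed to be small enough for the array to fit in memory:

import java.util.Scanner;

public class MaxMex {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        while (t-- > 0) {
            int n = in.nextInt();
            int k = in.nextInt();
            int limit = n + k;                        // the answer can never exceed n + k
            boolean[] present = new boolean[limit + 2];
            for (int i = 0; i < n; i++) {
                int x = in.nextInt();
                if (x <= limit) present[x] = true;    // larger values can never block the mex
            }
            int answer = 0;
            for (int v = 0; v <= limit + 1; v++) {
                if (!present[v]) {
                    if (k > 0) k--;                   // fill this hole with one of the K insertions
                    else { answer = v; break; }       // the first hole we cannot fill is the maximum mex
                }
            }
            System.out.println(answer);
        }
    }
}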

Note: for this problem, many other (efficient) solutions are also possible, but this one i found easiest.

Problem CHEFCOUN

Problem difficulty:Easy

Problem Explanation

Given an incorrect solution of problem CHEFSUM, generate a test case with an array containing a given number of elements N (99991 <= N <= 100000) on which it fails.

Solution

In this problem, the inexpert coder fails to take into account that prefsum(i)+sufsum(i) may not fit integer range.

Integer overflow is a scenario where the resultant number arising from addition, subtraction and other arithmetic operations, exceeds the maximum range of Integer data type.

The Maximum range of unsigned int is 2^32-1 = 4294967296-1 (11111111111111111111111111111111 in binary)

whenever an unsigned int exceeds this value, only the last 32 bits are stored.

For Example 4294967298 has binary representation 100000000000000000000000000000010

If we try to store this value in unsigned int, it will be stored as 00000000000000000000000000000010 that is 2.

Notice that 4294967298 remainder 4294967296 = 2

let MOD = 2^32 = 4294967296

So the code snippet given in the problem returns the index in the array where (prefSum(i)+sufSum(i)) % MOD is minimum.

Actual Solution to CHEFSUM is the first index of minimum value in the given array

So we can construct any array such that

index where (prefsum(i)+suffsum(i))%MOD is minimum != smallest index where the number is minimum value of array.

I used a trick in my solution: I set the total sum of the array to 4294967296 - a[0]. X is chosen to be 43000 (any value above 4294967296 / N would solve the problem).

long Total = 4294967296 - 43000-43000; //Total denote sum of array excluding first element)

a[0] = 43000

Rest array is generated as for i = 1 to N-1{

a[i] = Total/(N-i)

Total -= a[i];

}

This way, almost all values are assigned value 42948 to 42954 (Depending on value of N).

The correct answer for the above generated array is 2, but the incorrect code will give the answer 1.

Because, PrefSum(i)+Suffsum(i) = Sum of whole array + a[i]

prefSum(0)+sufSum(0) = (4294967296 - 43000) + 43000 = 4294967296 = 0 (mod 4294967296)

Consequently, incorrect code return 1 while correct answer is 2.

Here's a link to my code
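A Java sketch of a generator along these lines (the numbers mirror the write-up above; any N from the subtask and any X above 2^32 / N should do, and if CHEFSUM expects a leading test-case count, that line would need to be printed first):

public class ChefcounGen {
    public static void main(String[] args) {
        int n = 99991;                        // any N allowed by the subtask
        long x = 43000;                       // anything above 2^32 / N, per the write-up
        long[] a = new long[n];
        a[0] = x;
        long total = (1L << 32) - 2 * x;      // sum of a[1..n-1], so that (sum of whole array) + a[0] == 2^32
        for (int i = 1; i < n; i++) {
            a[i] = total / (n - i);           // spread the remaining sum almost evenly (values around 42950)
            total -= a[i];
        }
        StringBuilder sb = new StringBuilder();
        sb.append(n).append('\n');
        for (int i = 0; i < n; i++) sb.append(a[i]).append(i + 1 < n ? ' ' : '\n');
        System.out.print(sb);
    }
}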

43000 was chosen because, if any number smaller than 42948 is chosen, it is likely that it is actually the correct answer of the problem, so the incorrect code would get the answer right despite the wrong logic, something Chef doesn't want to happen :)

PS: I liked this problem a lot. It was completely an out-of-box problem... Thanks to codechef problem setters.

Next problems are discussed in Unofficial Editorial Part 2.

Keep Coding. Feel free to ask anything.

CHEFCCYL - Editorial


PROBLEM LINK

Practice
Contest

Author:Dmytro Berezin
Tester:Alexey Zayakin
Editorialist:Jakub Safin

DIFFICULTY

MEDIUM

PREREQUISITES

prefix sums

PROBLEM

You're given a weighted graph formed by disjoint cycles connected cyclically by $N$ edges. Answer $Q$ queries; in each query, compute the length of the shortest path between two vertices.

QUICK EXPLANATION

Use prefix sums to compute min. distances in cycles. Precompute distances between vertices connected to neighboring cycles and their prefix sums. Compute the distances for each possible direction in the outer cycle using that.

EXPLANATION

Yo dawg, I heard you like cycles so I put cycles in a cycle...

Our graph is formed by taking the "outer" cycle and replacing each of its vertices by one of the "inner" cycles. Any path between two vertices in different inner cycles then corresponds to a path in the outer cycle.

If we forget about moving through edges of the inner cycles, then there are just 2 possible paths in the outer cycle. We can compute the minimum distance (including the inner cycles' edges, of course) for both of them and take the minimum.

Computing distances in a cycle

Let's take a cycle with $N$ vertices and edges, where the edge between $i$ and $i\%N+1$ has length $w_i$. The fastest way to compute the two distances between vertices $v_1,v_2$ (when moving clockwise and counter-clockwise) uses prefix sums. Let's take $v_1 \le v_2$ and compute the prefix sums $W_i=\sum_{j=1}^i w_j$ (with the usual $W_0=0$); one distance between $v_1$ and $v_2$ is $d_1=\sum_{j=v_1}^{v_2-1} w_j = W_{v_2-1}-W_{v_1-1}$ and the other must be $d_2=W_N-d_1$, since the two paths between $v_1$ and $v_2$ together cover all $N$ edges.

The prefix sums can be precomputed in $O(N)$ and then we can compute the distances for any pair of vertices in $O(1)$.

Distances in inner cycles

When we compute the prefix sums of the outer cycle, it's easy to find the distances between vertices in cycles $c_1$ and $c_2$ (since distances don't depend on vertices' order, $c_1 < c_2$) that comes from the outer edges. For the path from $(c_1,u_1)$ to $(c_2,u_2)$ with distance denoted above as $d_1$, we need to cross inner cycles numbered $c_1..c_2$, so the path will move from $u_1$ to vertex $v_1$ (in the notation used for the outer edges in the input) of cycle $c_1$, then to vertex $v_2$ of cycle $c_1+1$, then to vertex $v_1$ of $c_1+1$, etc., until vertex $v_2$ of $c_2$ and from there to $u_2$.

That means we need to add to the distance $d_1$ from the outer cycle the minimum distances between $u_1$ and $v_1(c_1)$, between $v_2(j)$ and $v_1(j)$ for every cycle $c_1 < j < c_2$, and between $v_2(c_2)$ and $u_2$.

If we do the same thing with prefix sums for each inner cycle, we can compute first and last of these min. distances easily. The other min. distances are a problem, since there's a lot of them. However, they don't depend on $u_1$ or $u_2$, just $c_1$ and $c_2$, so we can use some more precomputation.

Let's denote by $X_i$ the minimum distance between $v_1(i)$ and $v_2(i)$ in the $i$-th inner cycle; we can compute these $N$ numbers easily either by brute force or using the same prefix sums we've built for inner cycles. Now we need to compute $\sum_{i=c_1+1}^{c_2-1} X_i$ - but that can be done exactly the same way as the distances in cycles! We just need to build the array of prefix sums of $X_{1..N}$; if the $i$-th prefix sum is $S_i$ (with $S_0=0$ again), then our sum is $S_{c_2-1}-S_{c_1}$.

The other path in the outer cycle (denoted above as $d_2$) can be computed in a similar way. We're moving from $u_1$ through $v_2(c_1)$, $v_1(c_1-1)$, $v_2(c_1-1)$... $v_2(1)$, $v_1(N)$... $v_1(c_2)$ to $u_2$. Again, the paths $u_1$ to $v_2(c_1)$ and $v_1(c_2)$ to $u_2$ can be found easily; the rest corresponds to $\sum_{i=1}^{c_1-1} X_i + \sum_{i=c_2+1}^N X_i$, which can be rewritten using prefix sums as $S_N-S_{c_2}+S_{c_1-1}$.

The time and memory complexity of this algorithm is linear in input size, since precomputing prefix sums is linear and each query can be processed in $O(1)$.

AUTHOR'S AND TESTER'S SOLUTIONS

Setter's solution
Tester's solution
Editorialist's solution

SHTARR - Editorial


PROBLEM LINK

Practice
Contest

Author:Denis Anischenko
Tester:Alexey Zayakin
Editorialist:Jakub Safin

DIFFICULTY

MEDIUM-HARD

PREREQUISITES

segment trees

PROBLEM

You're given an array $A$ of $N$ integers and $Q$ queries of two types:

  • add $x$ to $A\lbrack i\rbrack $
  • for given $i,L,R$, compute the number of indices $j$ such that there's a ray from $\left(i-\frac12,k-\frac12\right)$ to $\left(\infty,k-\frac12\right)$ for some $L \le k \le R$ that intersects the line segment from $(j,0)$ to $(j,A\lbrack j\rbrack)$ and doesn't intersect any such segment for any smaller $j$

Answer the queries of the second type.

QUICK EXPLANATION

Reduce the problem to counting numbers greater than any previous number, stopping at the first number $\ge R$. Use a segment tree computing max(); in the same segment tree, compute the answers to queries of type 2 for each segment. Use that to answer queries in $O(\log^2 N)$.

EXPLANATION

Geometry is bad. Let's reformulate this problem without geometry.

We're looking for indices $j \ge i$. If the ray at height $k-1/2$ intersects some segment with height $A\lbrack j\rbrack$, then (all elements of $A$ are integers) $A\lbrack j\rbrack \ge k$. That means we need to count indices $j \ge i$ such that $A\lbrack j\rbrack \ge L$, $A\lbrack j\rbrack > A\lbrack l\rbrack$ for any $i \le l < j$ (otherwise any ray will intersect some earlier taller segment) and $\mathrm{max}(A\lbrack l\rbrack)$ for $i \le l < j$ is less than $R$ (all rays will stop at a segment of height $\ge R$).

The third condition can be handled easily. If we find the smallest $h \ge i$ such that $A_h \ge R$, then we can ignore segments $A_j$ for $j > h$.

The first condition can be handled easily too - we can find the smallest index $s \ge i$ such that $A_s \ge L$, then we can ignore segments $A_j$ for $j < s$ (because they're shorter than $L$) and count segments in the range $\lbrack s,h\rbrack$ that satisfy the second condition; thanks to $A_s \ge L$, they must satisfy the first condition too (the third condition is automatically satisfied by $j \le h$).

We're left with the problem of counting $A_j \in \lbrack s,h\rbrack$ such that $A_k < A_j$ for all $s \le k < j$. It turns out we can do that in the same way as finding $s$, and even at the same time: using a segment tree.

In each node of this segment tree, we should store the maximum in the node's segment and the answer to the problem - number of elements which are larger than any earlier element in that segment.

Finding $h$

The "maximum" part of our segment tree will suffice for this. How do we find the smallest $j \ge i$ such that $A_j \ge R$ (or $j=N$ if there's no such index) within the segment of some node $n$? If $n$ contains just 1 element, this is trivial; otherwise, it has 2 sons.

If the maximum is $< R$ or all indices $j \ge i$ are outside of this node's segment, the answer is definitely "nothing". Otherwise, let's ask the left son the same question and look at the answer.

If it returns some index, we don't need to bother with the right son -- we've got what we need. If it returns "nothing", we have to ask the right son.

The time complexity of computing this is $O(\log{N})$ even if it might not seem so at first. That's because the first time we asked the left son, which contained some indices $\ge i$, got "nothing" and branched to the right son, we won't need any branching anymore. If we move to the left son, then the left son has to contain something $\ge R$; otherwise, we can just move to the right son instantly. It can be viewed as finding $O(\log{N})$ segments which cover $[i,N)$, then picking the first one with maximum $\ge R$ and binary-searching inside it.

If we do this, we've got $h$, can run the rest on $\lbrack i,h\rbrack$ and forget about $R$ forever.

Answering a subquery

Let's say that we need to know the answer for a segment $[z,k)$ in some node $n$ with given $L$. The stored answer for this segment (with $L=-\infty$) is $a$. If $n$ contains just 1 element, this is trivial; otherwise, it has 2 sons whose answers are $a_l,a_r$. Let's look at the maximum from the left son.

If it's $\le L$, we can safely ignore this son and simply recurse to the right son.

Otherwise, we can recurse to the left son and get its part of the answer; since its maximum is $> L$, the right son's part doesn't depend on $L$ (only on the left son's maximum), so we know what it is: $a-a_l$ (watch out, not $a_r$).

We have straightforward non-branching recursion here, so the time complexity is $O(\log{N})$.
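For concreteness, here is a compact Java sketch of the per-node data and the subquery described above (the search for $h$ and the restriction of queries to $\lbrack i,h\rbrack$ from the surrounding sections are omitted; the names are my own, not the author's or tester's):

public class PrefixMaxTree {
    int n;
    long[] mx;    // mx[v]   = maximum in the segment of node v
    int[] cnt;    // cnt[v]  = answer for node v's segment with L = -infinity
    int[] size;   // size[v] = number of elements in node v's segment

    PrefixMaxTree(long[] a) {
        n = a.length;
        mx = new long[4 * n];
        cnt = new int[4 * n];
        size = new int[4 * n];
        build(1, 0, n, a);
    }

    void build(int v, int lo, int hi, long[] a) {
        size[v] = hi - lo;
        if (hi - lo == 1) { mx[v] = a[lo]; cnt[v] = 1; return; }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, a);
        build(2 * v + 1, mid, hi, a);
        pull(v);
    }

    void pull(int v) {
        mx[v] = Math.max(mx[2 * v], mx[2 * v + 1]);
        // the right half contributes only elements that beat everything in the left half
        cnt[v] = cnt[2 * v] + subquery(2 * v + 1, mx[2 * v]);
    }

    // how many j in v's segment have A[j] > L and A[j] greater than every earlier element of the segment
    int subquery(int v, long L) {
        if (mx[v] <= L) return 0;
        if (size[v] == 1) return 1;
        if (mx[2 * v] <= L) return subquery(2 * v + 1, L);
        // the left son's maximum already exceeds L, so the right son's share no longer depends on L
        return subquery(2 * v, L) + cnt[v] - cnt[2 * v];
    }

    void update(int pos, long value) { update(1, 0, n, pos, value); }

    void update(int v, int lo, int hi, int pos, long value) {
        if (hi - lo == 1) { mx[v] = value; return; }
        int mid = (lo + hi) / 2;
        if (pos < mid) update(2 * v, lo, mid, pos, value);
        else update(2 * v + 1, mid, hi, pos, value);
        pull(v);   // pull calls subquery, so an update costs O(log^2 N) in total
    }

    public static void main(String[] args) {
        PrefixMaxTree t = new PrefixMaxTree(new long[]{2, 5, 8, 4, 7, 9});
        System.out.println(t.subquery(1, 6));   // prefix maxima greater than 6: 8 and 9 -> 2
        t.update(0, 10);
        System.out.println(t.subquery(1, 6));   // now only 10 qualifies -> 1
    }
}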

Completing the algorithm

How to answer a query of type 2 with given $[z,k)$ (initially $z=i,k=h+1$) and $L$ if we're in some node $n$? If $n$ covers exactly the segment $\lbrack i,h\rbrack$, that's just running the above mentioned subquery. Cases where $n$ contains one element or doesn't cover anything from $\lbrack i,h\rbrack $ are trivial. Otherwise, let's recurse into the sons of the current node. A son that covers a segment $S$ will get $[z,k)$ modified to $[z,k)\cap S$, the right son will get $L$ increased to $\mathrm{max}(L,m_l)$ where $m_l$ is the maximum from the left son, but that's it. Again, this visits $O(\log{N})$ nodes which cover the original segment $\lbrack i,h\rbrack$.

We still need to deal with type 1 queries -- updating the tree. You should know by now how updating maxima works. In order to update the answer we store in any node (of which we again visit and change just $O(\log{N})$), we need to ask the subquery on its segment with $L=-1$. Note that we've got a separate method dealing with the hard stuff and everything else is just standard segment tree operations.

The total time complexity is $O(N\log{N}+Q\log^2{N})$ -- we need to compute those subqueries every time we visit a node; the tree can be built by visiting each node just once and for each query, we visit $O(\log{N})$ nodes. As usual, the segment tree takes $O(N)$ memory.

Alternative slower solution

This type of problems can often be solved using sqrt decomposition with the general form: split queries into buckets of at most $K$ queries, before processing each bucket, split the array at indices which are going to change, do some preprocessing, use that to answer queries, update the array after processing the bucket.

In this case, we'll keep the indices which change (appear in queries of the first type) in the current bucket as completely separate. In order to process queries of the second type, we should find a way to speed up the process "start with $A_i$, keep jumping to the first larger number to the right, increment the answer if it's in the range $\lbrack L,R\rbrack$", since we can't afford to just make all these jumps.

For each chunk of the array $A$ that doesn't change, we can compute all its suffix and prefix maxima. When solving a query of the second type, we can now go through all $O(K)$ changing elements and unchanging chunks (subarrays); using the prefix maxima, we know where we encounter the first number $\ge R$ and can compute the exact index by binary search. Using suffix maxima, we can compute the maximum element we must have jumped to before processing each of them (we should start with that equal to $L-1$), which allows us to decide if the changing elements should increment the answer to this query. We can't do that with the unchanging chunks, but for each of them except maybe the first and last one (we'll deal with them separately), we know the $L$ we have when processing it and that we should add to the answer for this query all elements we jump to when starting at $L$ to the left of this chunk.

We can't answer these subqueries directly, but why not offline, at once for each chunk $C$? If we're starting to the left of this chunk, we can discard any element $C_i$ such that $C_j \ge C_i$ for some $j < i$. This can be done in a single pass in linear time and gives us an increasing sequence. Then, we have several queries $Q_j$, each of which needs the answer for some $L_j$. We need the queries sorted by $L_j$; then we can use a 2-pointer algorithm where we're going through the elements of that subsequence in increasing order and moving the second pointer to the query with the largest $L_j$ that leads to jumping on the current element.

How do we sort the queries? A standard $O(K\log{K})$ sort is too slow, we need it in $O(K)$. Fortunately, we don't need to sort separately for each chunk -- we can keep a sorted array of them, add/delete queries when appropriate (each is added or deleted just once) and when updating their $L$ to max (current $L$, maximum of the current chunk or changing element), that's just taking a segment of queries with old $L$ less than that maximum, giving them the same $L$ and moving them to the right place in the array to keep it sorted. Everything is $O(K)$ (pun intended).

We're left with the starting and final chunk for each query. We'll need to solve these offline as well; let's describe just what to do with starting chunks, since it's symmetric in both cases (plus there's the case when they're the same chunk, which is annoying but similar). This time, $Q_j$ has $L_j$ and the first index $i_j$.

We can afford to quicksort the queries by $i_j$ now. Let's add elements from the end to an array and use the well-known stack algorithm for maintaining the current increasing subsequence -- before adding any element, we remove the elements $\le$ it from the end of the array. If we've added all elements $\ge i_j$, we should find how many of them are $\ge L_j$ and add that to the answer for $Q_j$. That can be easily done by binary search, since the array is sorted, which makes $O(\log{N})$ per query.

What's the time complexity? For a fixed $K$, $O(QK+Q\log{N}+NQ/K)$; if we choose $K$ optimally around $\sqrt{N}$, that becomes $O(Q\sqrt{N})$. Memory complexity: $O(K^2+N+Q)=O(N+Q)$.

This approach is slower and requires dealing with more small stupid parts, but if you're fixated on solving this problem with sqrt decomposition or just don't know how to modify segment trees, it's possible.

AUTHOR'S AND TESTER'S SOLUTIONS

Setter's solution
Tester's solution

Please help me in MARRAYS
