Details
- Type: Monitor
- Status: Closed
- Priority: Major
- Resolution: Done
- Affects Version/s: None
- Fix Version/s: 2021
- Component/s: FIWARE-TECH-HELP
- Labels: None
- HD-Enabler: Cosmos
Description
Created question in FIWARE Q/A platform on 28-11-2015 at 19:11
Please answer this question at https://stackoverflow.com/questions/33974754/how-can-i-read-and-transfer-chunks-of-file-with-hadoop-webhdfs
Question:
How can I read and transfer chunks of a file with Hadoop WebHDFS?
Description:
I need to transfer big files (at least 14 MB) from the Cosmos instance of the FIWARE Lab to my backend.
I used the Spring RestTemplate as a client interface for the Hadoop WebHDFS REST API described here, but I ran into an I/O exception:
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=open&user.name=<user.name>":Truncated chunk ( expected size: 14744230; actual size: 11285103); nested exception is org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 14744230; actual size: 11285103)
    at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:580)
    at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:545)
    at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:466)
This is the actual code that generates the exception:
// Plain RestTemplate backed by Apache HttpComponents, with a byte[] converter
RestTemplate restTemplate = new RestTemplate();
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());

HttpEntity<?> entity = new HttpEntity<>(headers);

// Build the WebHDFS OPEN request for the remote file
UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(hdfs_path)
        .queryParam("op", "OPEN")
        .queryParam("user.name", user_name);

// Download the entire file into memory as a byte[] ...
ResponseEntity<byte[]> response = restTemplate.exchange(
        builder.build().encode().toUri(), HttpMethod.GET, entity, byte[].class);

// ... then write it out to the local path
FileOutputStream output = new FileOutputStream(new File(local_path));
IOUtils.write(response.getBody(), output);
output.close();
I think this is due to a transfer timeout on the Cosmos instance, so I tried sending a curl request to the path with the offset, buffer, and length parameters specified, but they seem to be ignored: I got back the whole file.
Thanks in advance.
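
For reference, the WebHDFS OPEN operation documents optional offset and length query parameters, so one workaround is to pull the file in fixed-size chunks and stream each chunk straight to disk rather than buffering the whole response in a byte[]. The sketch below is a minimal illustration of that idea, not the accepted Stack Overflow answer: ChunkedWebHdfsRead and CHUNK_SIZE are made-up names, the hdfs_path/user_name/local_path placeholders mirror the question, and, as the questioner observed, the HttpFS gateway on port 14000 may ignore the range parameters depending on its version.

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.commons.io.IOUtils;
import org.springframework.http.HttpMethod;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.util.UriComponentsBuilder;

public class ChunkedWebHdfsRead {

    // Hypothetical chunk size; tune it to stay well under the server's timeout.
    private static final long CHUNK_SIZE = 4 * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        String hdfs_path = "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>";
        String user_name = "<user.name>";
        String local_path = "<local_path>";

        RestTemplate restTemplate =
                new RestTemplate(new HttpComponentsClientHttpRequestFactory());

        try (OutputStream output = new FileOutputStream(new File(local_path))) {
            long offset = 0;
            while (true) {
                // WebHDFS OPEN accepts optional offset/length parameters for range reads.
                URI uri = UriComponentsBuilder.fromHttpUrl(hdfs_path)
                        .queryParam("op", "OPEN")
                        .queryParam("user.name", user_name)
                        .queryParam("offset", offset)
                        .queryParam("length", CHUNK_SIZE)
                        .build().encode().toUri();

                // Stream the response body straight to disk instead of
                // materializing it as a byte[] in memory.
                Long copied = restTemplate.execute(uri, HttpMethod.GET, null,
                        response -> IOUtils.copyLarge(response.getBody(), output));

                // Note: some servers return an error rather than an empty body once
                // offset reaches EOF; the short-read check below usually stops first.
                if (copied == null || copied == 0) {
                    break; // empty body: nothing left to read
                }
                offset += copied;
                if (copied < CHUNK_SIZE) {
                    break; // short read: this was the last chunk
                }
            }
        }
    }
}

A more robust variant would first call op=GETFILESTATUS to obtain the exact file length and loop while offset is less than that length, instead of relying on the short-read/empty-body heuristic at end of file.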
Activity
Field | Original Value | New Value |
---|---|---|
Component/s | | FIWARE-TECH-HELP [ 10278 ] |
Status | Open [ 1 ] | In Progress [ 3 ] |
Resolution | | Done [ 10000 ] |
Status | In Progress [ 3 ] | Closed [ 6 ] |
HD-Enabler | | Cosmos [ 10872 ] |
Description | (full question text, duplicated from the Description above) | (full question text, duplicated from the Description above) |
Assignee | | Backlog Manager [ backlogmanager ] |
Fix Version/s | | 2021 [ 12600 ] |
Transition | Time In Source Status | Execution Times | Last Executer | Last Execution Date |
---|---|---|---|---|
Open → In Progress | 2h 54m | 1 | Backlog Manager | 22/May/17 6:10 PM |
In Progress → Closed | 3h | 1 | Backlog Manager | 22/May/17 9:10 PM |
2017-05-22 15:17|CREATED monitor | # answers= 1, accepted answer= True