Comparing Java and Hadoop Serialization and Deserialization
Published: 2021-06-29 · Category: Technical Articles


Hadoop implements its own serialization framework. Compared with JDK serialization, Hadoop's is simpler and takes noticeably less space on the wire. Within a cluster, data is exchanged mostly as these serialized byte sequences, so faster encoding and smaller payloads matter a great deal.

First, the JDK version:

```java
package hdfs;

import java.io.Serializable;

public class People implements Serializable {

    private static final long serialVersionUID = 1L;

    private int age;
    private String name;

    public People() {}

    public People(int age, String name) {
        super();
        this.age = age;
        this.name = name;
    }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```
```java
package hdfs;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class TestJDKSeriable {

    public static void main(String[] args) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(baos);
            oos.writeObject(new People(19, "zhangsan"));
            System.out.println("Byte size: " + baos.size());
            oos.close();
            baos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

Output: Byte size: 81
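The example above only shows the write path. For completeness, a minimal round-trip sketch that also reads the object back with `ObjectInputStream` (the nested `People` is a local copy of the class above so the sketch stands alone):

```java
import java.io.*;

// Round-trip sketch: serialize a People with ObjectOutputStream,
// then deserialize it with ObjectInputStream.
public class TestJDKRoundTrip {

    static class People implements Serializable {
        private static final long serialVersionUID = 1L;
        int age;
        String name;
        People(int age, String name) { this.age = age; this.name = name; }
    }

    static People roundTrip() throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(new People(19, "zhangsan"));
        }
        // Read the same bytes back; readObject rebuilds the object graph.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            return (People) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        People p = roundTrip();
        System.out.println(p.age + " " + p.name); // prints: 19 zhangsan
    }
}
```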

Now the Hadoop version:

```java
package hdfs;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class PeopleWritable implements WritableComparable<PeopleWritable> {

    private IntWritable age;
    private Text name;

    // Initialize the fields so readFields() on a freshly constructed
    // instance does not hit a NullPointerException.
    public PeopleWritable() {
        this(new IntWritable(), new Text());
    }

    public PeopleWritable(IntWritable age, Text name) {
        super();
        this.age = age;
        this.name = name;
    }

    public IntWritable getAge() { return age; }
    public void setAge(IntWritable age) { this.age = age; }
    public Text getName() { return name; }
    public void setName(Text name) { this.name = name; }

    public void write(DataOutput out) throws IOException {
        age.write(out);
        name.write(out);
    }

    public void readFields(DataInput in) throws IOException {
        age.readFields(in);
        name.readFields(in);
    }

    public int compareTo(PeopleWritable o) {
        int cmp = age.compareTo(o.getAge());
        if (0 != cmp) return cmp;
        return name.compareTo(o.getName());
    }
}
```
```java
package hdfs;

import java.io.IOException;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class TestHadoopSeriable {

    public static void main(String[] args) {
        try {
            DataOutputBuffer dob = new DataOutputBuffer();
            PeopleWritable pw = new PeopleWritable(new IntWritable(19), new Text("zhangsan"));
            pw.write(dob);
            System.out.println("Byte size: " + dob.getLength());
            dob.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

Output: Byte size: 13
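The 13 bytes are easy to account for: `IntWritable` writes a fixed 4-byte int, and `Text` writes a varint length prefix (a single byte for short strings) followed by the UTF-8 bytes, 8 of them for "zhangsan". A plain-JDK sketch of that layout (a hypothetical helper class, not Hadoop's actual implementation):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Sketch of the Writable byte layout using only the JDK, assuming the
// length prefix fits in one byte (true for strings under 128 bytes).
public class WritableLayoutSketch {

    public static byte[] encode(int age, String name) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeInt(age);                        // IntWritable: 4 bytes
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        out.writeByte(utf8.length);               // length prefix: 1 byte here
        out.write(utf8);                          // "zhangsan": 8 bytes
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // 4 + 1 + 8 = 13 bytes, matching the Hadoop result above
        System.out.println(encode(19, "zhangsan").length);
    }
}
```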

For the same data, JDK serialization produced 81 bytes while Hadoop serialization used only 13, a significant saving in storage space and cluster transfer cost. The difference is overhead: the JDK stream carries class metadata (class name, serialVersionUID, field descriptors) alongside the values, whereas a Writable writes only the raw field bytes.
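Most of the JDK overhead is the class descriptor that `ObjectOutputStream` writes once per stream; a second object of the same class costs far fewer bytes because the descriptor is back-referenced rather than repeated. A small demonstration (hypothetical class names; exact sizes vary with the class name length):

```java
import java.io.*;

// Shows that JDK serialization overhead is mostly per-class metadata:
// the second object of the same class adds far fewer bytes than the first.
public class MetadataOverheadDemo {

    static class People implements Serializable {
        private static final long serialVersionUID = 1L;
        int age;
        String name;
        People(int age, String name) { this.age = age; this.name = name; }
    }

    // Returns {bytes for the first object, extra bytes for the second}.
    public static int[] sizes() throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(baos);
        oos.writeObject(new People(19, "zhangsan"));
        oos.flush();
        int first = baos.size();
        oos.writeObject(new People(20, "lisi"));  // same class, new data
        oos.flush();
        int second = baos.size() - first;
        oos.close();
        return new int[]{first, second};
    }

    public static void main(String[] args) throws IOException {
        int[] s = sizes();
        System.out.println("first object: " + s[0] + " bytes, second adds: " + s[1]);
    }
}
```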

Original article: https://bupt-xbz.blog.csdn.net/article/details/79178175

