SDP(5): ScalikeJDBC - JDBC-Engine: Streaming

Date: 2022-10-08 15:19:14

For a general-purpose database programming engine, streaming is a must-have capability for processing very large datasets. As before, we describe the request that produces a stream through a context type. Because this streaming context is fairly simple, and it also has to carry the row-extraction function extractor, we define it separately:

case class JDBCQueryContext[M](
  dbName: Symbol,
  statement: String,
  parameters: Seq[Any] = Nil,
  fetchSize: Int = 100,   // assumed default
  autoCommit: Boolean = false,
  queryTimeout: Option[Int] = None,
  extractor: WrappedResultSet => M)

Since the JDBCQueryContext is handed over to the JDBC-Engine for execution, every parameter the streaming function needs must be defined explicitly in it, including the extractor function. In fact, JDBCQueryContext also fully covers the needs of the jdbcQueryResult function, which we will redesign later. The streaming function jdbcAkkaStream produces an akka-stream Source, as follows:

def jdbcAkkaStream[A](ctx: JDBCQueryContext[A])(
  implicit ec: ExecutionContextExecutor): Source[A, NotUsed] = {
  val publisher: DatabasePublisher[A] = NamedDB(ctx.dbName) readOnlyStream {
    val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
    ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
    val sql: SQL[A, HasExtractor] = rawSql.map(ctx.extractor)
    sql.iterator
      .withDBSessionForceAdjuster(session => {
        session.connection.setAutoCommit(ctx.autoCommit)
        session.fetchSize(ctx.fetchSize)
      })
  }
  Source.fromPublisher[A](publisher)
}
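Because the Source is backed by a reactive-streams DatabasePublisher, rows are only fetched from the database as fast as downstream demand allows. As a quick illustration (a sketch only, borrowing the ctx and DataRow definitions from the demo program below), inserting a throttle stage slows consumption, and backpressure keeps the JDBC read rate in step:

// a sketch: cap downstream demand at 10 rows per second;
// backpressure propagates up to the JDBC result-set iteration
import scala.concurrent.duration._
import akka.stream.ThrottleMode

val slowSource: Source[DataRow, NotUsed] =
  jdbcAkkaStream(ctx)
    .throttle(elements = 10, per = 1.second, maximumBurst = 10, ThrottleMode.shaping)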

All we need to provide is a Sink to put this akka-stream to use:

import akka.actor._
import akka.stream.scaladsl._
import akka.stream._
import scalikejdbc._
import configdbs._
import jdbccontext._
import JDBCEngine._

object JDBCStreaming extends App {

  implicit val actorSys = ActorSystem("actor-system")
  implicit val ec = actorSys.dispatcher
  implicit val mat = ActorMaterializer()

  ConfigDBsWithEnv("dev").setup('h2)
  ConfigDBsWithEnv("dev").loadGlobalSettings()

  case class DataRow(year: String, state: String, county: String, value: String)

  //data row converter
  val toRow = (rs: WrappedResultSet) => DataRow(
    year = rs.string("REPORTYEAR"),
    state = rs.string("STATENAME"),
    county = rs.string("COUNTYNAME"),
    value = rs.string("VALUE")
  )

  //construct the context
  val ctx = JDBCQueryContext[DataRow](
    dbName = 'h2,
    statement = "select * from AIRQM",
    extractor = toRow
  )

  //pass the context to construct an akka-stream source
  val akkaSource = jdbcAkkaStream(ctx)

  //a sink to display the rows
  val snk = Sink.foreach[(DataRow, Long)] { case (row, idx) =>
    println(s"rec#: $idx - year: ${row.year} location: ${row.state},${row.county} value: ${row.value}")
  }

  //the stream can be terminated manually with kill.shutdown
  val kill: UniqueKillSwitch =
    akkaSource.zipWithIndex.viaMat(KillSwitches.single)(Keep.right).to(snk).run()

  scala.io.StdIn.readLine()
  kill.shutdown()
  actorSys.terminate()
  println("+++++++++++++++")
}

A test run works as expected. Below is the new version of the jdbcQueryResult function:

def jdbcQueryResult[C[_] <: TraversableOnce[_], A](
  ctx: JDBCQueryContext[A])(
  implicit cbf: CanBuildFrom[Nothing, A, C[A]]): C[A] = {
  val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
  ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
  rawSql.fetchSize(ctx.fetchSize)
  implicit val session = NamedAutoSession(ctx.dbName)
  val sql: SQL[A, HasExtractor] = rawSql.map(ctx.extractor)
  sql.collection.apply[C]()
}
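Because the result collection type comes in through the CanBuildFrom parameter, the caller decides which collection to materialize. A minimal sketch, assuming a ctx: JDBCQueryContext[DataRow] like the one in the demo above:

// a sketch: the same context can materialize different collection types
val asVector: Vector[DataRow] = jdbcQueryResult[Vector, DataRow](ctx)
val asList: List[DataRow] = jdbcQueryResult[List, DataRow](ctx)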

A test run:

object SlickDAO {
  import slick.jdbc.H2Profile.api._

  case class CountyModel(id: Int, name: String)
  case class CountyTable(tag: Tag) extends Table[CountyModel](tag, "COUNTY") {
    def id = column[Int]("ID", O.AutoInc, O.PrimaryKey)
    def name = column[String]("NAME", O.Length(64))   // assumed column length
    def * = (id, name) <> (CountyModel.tupled, CountyModel.unapply)
  }
  val CountyQuery = TableQuery[CountyTable]
  val filter = "Kansas"
  val qry = CountyQuery.filter { _.name.toUpperCase like s"%${filter.toUpperCase}%" }
  val statement = qry.result.statements.head
}
import SlickDAO._

def toRow: WrappedResultSet => CountyModel = rs =>
  CountyModel(id = rs.int("id"), name = rs.string("name"))

//construct the context
val slickCtx = JDBCQueryContext[CountyModel](
  dbName = 'h2,
  statement = "select * from county where id > ? and id < ?",
  parameters = Seq(0, 100),   // assumed example bounds
  extractor = toRow
)

val vecCounty: Vector[CountyModel] = jdbcQueryResult[Vector, CountyModel](slickCtx)
vecCounty.foreach(r => println(s"${r.id},${r.name}"))
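Note that SlickDAO already renders the Slick query to a SQL string (SlickDAO.statement), so the generated statement can be fed straight into the engine too. A sketch, under the assumption that Slick inlined the literal filter value and left no bind parameters in the statement:

// a sketch: reuse the Slick-generated SQL text with the JDBC-Engine
val slickStmtCtx = JDBCQueryContext[CountyModel](
  dbName = 'h2,
  statement = SlickDAO.statement, // SQL rendered by Slick from qry
  extractor = toRow
)
val fromSlick = jdbcQueryResult[Vector, CountyModel](slickStmtCtx)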

Below is the complete example source code from this discussion:

build.sbt

// Scala 2.10, 2.11, 2.12
libraryDependencies ++= Seq(
"org.scalikejdbc" %% "scalikejdbc" % "3.2.1",
"org.scalikejdbc" %% "scalikejdbc-test" % "3.2.1" % "test",
"org.scalikejdbc" %% "scalikejdbc-config" % "3.2.1",
"org.scalikejdbc" %% "scalikejdbc-streams" % "3.2.1",
"org.scalikejdbc" %% "scalikejdbc-joda-time" % "3.2.1",
"com.h2database" % "h2" % "1.4.196",
"mysql" % "mysql-connector-java" % "6.0.6",
"org.postgresql" % "postgresql" % "42.2.0",
"commons-dbcp" % "commons-dbcp" % "1.4",
"org.apache.tomcat" % "tomcat-jdbc" % "9.0.2",
"com.zaxxer" % "HikariCP" % "2.7.4",
"com.jolbox" % "bonecp" % "0.8.0.RELEASE",
"com.typesafe.slick" %% "slick" % "3.2.1",
"ch.qos.logback" % "logback-classic" % "1.2.3",
"com.typesafe.akka" %% "akka-actor" % "2.5.4",
"com.typesafe.akka" %% "akka-stream" % "2.5.4"
)

resources/application.conf

# JDBC settings
test {
  db {
    h2 {
      driver = "org.h2.Driver"
      url = "jdbc:h2:tcp://localhost/~/slickdemo"
      user = ""
      password = ""
      # assumed pool settings
      poolInitialSize = 5
      poolMaxSize = 7
      poolConnectionTimeoutMillis = 1000
      poolValidationQuery = "select 1 as one"
      poolFactoryName = "commons-dbcp2"
    }
  }

  db.mysql.driver = "com.mysql.cj.jdbc.Driver"
  db.mysql.url = "jdbc:mysql://localhost:3306/testdb"
  db.mysql.user = "root"
  db.mysql.password = ""
  # assumed pool settings
  db.mysql.poolInitialSize = 5
  db.mysql.poolMaxSize = 7
  db.mysql.poolConnectionTimeoutMillis = 1000
  db.mysql.poolValidationQuery = "select 1 as one"
  db.mysql.poolFactoryName = "bonecp"

  # scalikejdbc global settings
  scalikejdbc.global.loggingSQLAndTime.enabled = true
  scalikejdbc.global.loggingSQLAndTime.logLevel = info
  scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
  scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
  scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
  scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
  scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
  scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 10
}
dev {
  db {
    h2 {
      driver = "org.h2.Driver"
      url = "jdbc:h2:tcp://localhost/~/slickdemo"
      user = ""
      password = ""
      poolFactoryName = "hikaricp"
      # assumed pool settings
      numThreads = 10
      maxConnections = 12
      minConnections = 4
      keepAliveConnection = true
    }
    mysql {
      driver = "com.mysql.cj.jdbc.Driver"
      url = "jdbc:mysql://localhost:3306/testdb"
      user = "root"
      password = ""
      # assumed pool settings
      poolInitialSize = 5
      poolMaxSize = 7
      poolConnectionTimeoutMillis = 1000
      poolValidationQuery = "select 1 as one"
      poolFactoryName = "bonecp"
    }
    postgres {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost:5432/testdb"
      user = "root"
      password = ""
      poolFactoryName = "hikaricp"
      # assumed pool settings
      numThreads = 10
      maxConnections = 12
      minConnections = 4
      keepAliveConnection = true
    }
  }
  # scalikejdbc global settings
  scalikejdbc.global.loggingSQLAndTime.enabled = true
  scalikejdbc.global.loggingSQLAndTime.logLevel = info
  scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
  scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
  scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
  scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
  scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
  scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 10
}

JDBCEngine.scala

package jdbccontext

import java.sql.PreparedStatement

import scala.collection.generic.CanBuildFrom
import akka.stream.scaladsl._
import scalikejdbc._
import scalikejdbc.streams._
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer

import scala.util._
import scalikejdbc.TxBoundary.Try._

import scala.concurrent.ExecutionContextExecutor

object JDBCContext {
  type SQLTYPE = Int
  // assumed constant values; the three types just need distinct Ints
  val SQL_SELECT: Int = 0
  val SQL_EXEDDL: Int = 1
  val SQL_UPDATE: Int = 2
  val RETURN_GENERATED_KEYVALUE = true
  val RETURN_UPDATED_COUNT = false
}

case class JDBCQueryContext[M](
  dbName: Symbol,
  statement: String,
  parameters: Seq[Any] = Nil,
  fetchSize: Int = 100,   // assumed default
  autoCommit: Boolean = false,
  queryTimeout: Option[Int] = None,
  extractor: WrappedResultSet => M)

case class JDBCContext(
  dbName: Symbol,
  statements: Seq[String] = Nil,
  parameters: Seq[Seq[Any]] = Nil,
  fetchSize: Int = 100,   // assumed default
  queryTimeout: Option[Int] = None,
  queryTags: Seq[String] = Nil,
  sqlType: JDBCContext.SQLTYPE = JDBCContext.SQL_SELECT,
  batch: Boolean = false,
  returnGeneratedKey: Seq[Option[Any]] = Nil,
  // no return: None, return by index: Some(1), by name: Some("id")
  preAction: Option[PreparedStatement => Unit] = None,
  postAction: Option[PreparedStatement => Unit] = None) { ctx =>

  //helper functions

  def appendTag(tag: String): JDBCContext = ctx.copy(queryTags = ctx.queryTags :+ tag)

  def appendTags(tags: Seq[String]): JDBCContext = ctx.copy(queryTags = ctx.queryTags ++ tags)

  def setFetchSize(size: Int): JDBCContext = ctx.copy(fetchSize = size)

  def setQueryTimeout(time: Option[Int]): JDBCContext = ctx.copy(queryTimeout = time)

  def setPreAction(action: Option[PreparedStatement => Unit]): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE &&
      !ctx.batch && ctx.statements.size == 1)
      ctx.copy(preAction = action)
    else
      throw new IllegalStateException("JDBCContex setting error: preAction not supported!")
  }

  def setPostAction(action: Option[PreparedStatement => Unit]): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE &&
      !ctx.batch && ctx.statements.size == 1)
      ctx.copy(postAction = action)
    else
      throw new IllegalStateException("JDBCContex setting error: postAction not supported!")
  }

  def appendDDLCommand(_statement: String, _parameters: Any*): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_EXEDDL) {
      ctx.copy(
        statements = ctx.statements ++ Seq(_statement),
        parameters = ctx.parameters ++ Seq(Seq(_parameters))
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: option not supported!")
  }

  def appendUpdateCommand(_returnGeneratedKey: Boolean, _statement: String, _parameters: Any*): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE && !ctx.batch) {
      ctx.copy(
        statements = ctx.statements ++ Seq(_statement),
        parameters = ctx.parameters ++ Seq(_parameters),
        returnGeneratedKey = ctx.returnGeneratedKey ++ (if (_returnGeneratedKey) Seq(Some(1)) else Seq(None))
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: option not supported!")
  }

  def appendBatchParameters(_parameters: Any*): JDBCContext = {
    if (ctx.sqlType != JDBCContext.SQL_UPDATE || !ctx.batch)
      throw new IllegalStateException("JDBCContex setting error: batch parameters only supported for SQL_UPDATE and batch = true!")

    var matchParams = true
    if (ctx.parameters != Nil)
      if (ctx.parameters.head.size != _parameters.size)
        matchParams = false
    if (matchParams) {
      ctx.copy(
        parameters = ctx.parameters ++ Seq(_parameters)
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: batch command parameters do not match!")
  }

  def setBatchReturnGeneratedKeyOption(returnKey: Boolean): JDBCContext = {
    if (ctx.sqlType != JDBCContext.SQL_UPDATE || !ctx.batch)
      throw new IllegalStateException("JDBCContex setting error: only supported in batch update commands!")
    ctx.copy(
      returnGeneratedKey = if (returnKey) Seq(Some(1)) else Nil
    )
  }

  def setDDLCommand(_statement: String, _parameters: Any*): JDBCContext = {
    ctx.copy(
      statements = Seq(_statement),
      parameters = Seq(_parameters),
      sqlType = JDBCContext.SQL_EXEDDL,
      batch = false
    )
  }

  def setUpdateCommand(_returnGeneratedKey: Boolean, _statement: String, _parameters: Any*): JDBCContext = {
    ctx.copy(
      statements = Seq(_statement),
      parameters = Seq(_parameters),
      returnGeneratedKey = if (_returnGeneratedKey) Seq(Some(1)) else Seq(None),
      sqlType = JDBCContext.SQL_UPDATE,
      batch = false
    )
  }

  def setBatchCommand(_statement: String): JDBCContext = {
    ctx.copy(
      statements = Seq(_statement),
      sqlType = JDBCContext.SQL_UPDATE,
      batch = true
    )
  }
}
object JDBCEngine {

  import JDBCContext._

  private def noExtractor(message: String): WrappedResultSet => Nothing = { (rs: WrappedResultSet) =>
    throw new IllegalStateException(message)
  }

  def jdbcAkkaStream[A](ctx: JDBCQueryContext[A])(
    implicit ec: ExecutionContextExecutor): Source[A, NotUsed] = {
    val publisher: DatabasePublisher[A] = NamedDB(ctx.dbName) readOnlyStream {
      val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
      ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
      val sql: SQL[A, HasExtractor] = rawSql.map(ctx.extractor)
      sql.iterator
        .withDBSessionForceAdjuster(session => {
          session.connection.setAutoCommit(ctx.autoCommit)
          session.fetchSize(ctx.fetchSize)
        })
    }
    Source.fromPublisher[A](publisher)
  }

  def jdbcQueryResult[C[_] <: TraversableOnce[_], A](
    ctx: JDBCQueryContext[A])(
    implicit cbf: CanBuildFrom[Nothing, A, C[A]]): C[A] = {
    val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
    ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
    rawSql.fetchSize(ctx.fetchSize)
    implicit val session = NamedAutoSession(ctx.dbName)
    val sql: SQL[A, HasExtractor] = rawSql.map(ctx.extractor)
    sql.collection.apply[C]()
  }

  def jdbcExcuteDDL(ctx: JDBCContext): Try[String] = {
    if (ctx.sqlType != SQL_EXEDDL) {
      Failure(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_EXEDDL'!"))
    }
    else {
      NamedDB(ctx.dbName) localTx { implicit session =>
        Try {
          ctx.statements.foreach { stm =>
            val ddl = new SQLExecution(statement = stm, parameters = Nil)(
              before = (ps: PreparedStatement) => {})(
              after = (ps: PreparedStatement) => {})
            ddl.apply()
          }
          "SQL_EXEDDL executed successfully."
        }
      }
    }
  }

  def jdbcBatchUpdate[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    if (ctx.statements == Nil)
      throw new IllegalStateException("JDBCContex setting error: statements empty!")
    if (ctx.sqlType != SQL_UPDATE) {
      Failure(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_UPDATE'!"))
    }
    else {
      if (ctx.batch) {
        if (noReturnKey(ctx)) {
          val usql = SQL(ctx.statements.head)
            .tags(ctx.queryTags: _*)
            .batch(ctx.parameters: _*)
          Try {
            NamedDB(ctx.dbName) localTx { implicit session =>
              ctx.queryTimeout.foreach(session.queryTimeout(_))
              usql.apply[Seq]()
              Seq.empty[Long].to[C]
            }
          }
        } else {
          val usql = new SQLBatchWithGeneratedKey(ctx.statements.head, ctx.parameters, ctx.queryTags)(None)
          Try {
            NamedDB(ctx.dbName) localTx { implicit session =>
              ctx.queryTimeout.foreach(session.queryTimeout(_))
              usql.apply[C]()
            }
          }
        }
      } else {
        Failure(new IllegalStateException("JDBCContex setting error: must set batch = true !"))
      }
    }
  }

  private def singleTxUpdateWithReturnKey[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    val Some(key) :: xs = ctx.returnGeneratedKey
    val params: Seq[Any] = ctx.parameters match {
      case Nil => Nil
      case p @ _ => p.head
    }
    val usql = new SQLUpdateWithGeneratedKey(ctx.statements.head, params, ctx.queryTags)(key)
    Try {
      NamedDB(ctx.dbName) localTx { implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val result = usql.apply()
        Seq(result).to[C]
      }
    }
  }

  private def singleTxUpdateNoReturnKey[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    val params: Seq[Any] = ctx.parameters match {
      case Nil => Nil
      case p @ _ => p.head
    }
    val before = ctx.preAction match {
      case None => pstm: PreparedStatement => {}
      case Some(f) => f
    }
    val after = ctx.postAction match {
      case None => pstm: PreparedStatement => {}
      case Some(f) => f
    }
    val usql = new SQLUpdate(ctx.statements.head, params, ctx.queryTags)(before)(after)
    Try {
      NamedDB(ctx.dbName) localTx { implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val result = usql.apply()
        Seq(result.toLong).to[C]
      }
    }
  }

  private def singleTxUpdate[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    if (noReturnKey(ctx))
      singleTxUpdateNoReturnKey(ctx)
    else
      singleTxUpdateWithReturnKey(ctx)
  }

  private def noReturnKey(ctx: JDBCContext): Boolean = {
    if (ctx.returnGeneratedKey != Nil) {
      val k :: xs = ctx.returnGeneratedKey
      k match {
        case None => true
        case Some(k) => false
      }
    } else true
  }

  def noActon: PreparedStatement => Unit = pstm => {}

  def multiTxUpdates[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    Try {
      NamedDB(ctx.dbName) localTx { implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val keys: Seq[Option[Any]] = ctx.returnGeneratedKey match {
          case Nil => Seq.fill(ctx.statements.size)(None)
          case k @ _ => k
        }
        val sqlcmd = ctx.statements zip ctx.parameters zip keys
        val results = sqlcmd.map { case ((stm, param), key) =>
          key match {
            case None =>
              new SQLUpdate(stm, param, Nil)(noActon)(noActon).apply().toLong
            case Some(k) =>
              new SQLUpdateWithGeneratedKey(stm, param, Nil)(k).apply().toLong
          }
        }
        results.to[C]
      }
    }
  }

  def jdbcTxUpdates[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit cbf: CanBuildFrom[Nothing, Long, C[Long]]): Try[C[Long]] = {
    if (ctx.statements == Nil)
      throw new IllegalStateException("JDBCContex setting error: statements empty!")
    if (ctx.sqlType != SQL_UPDATE) {
      Failure(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_UPDATE'!"))
    }
    else {
      if (!ctx.batch) {
        if (ctx.statements.size == 1)
          singleTxUpdate(ctx)
        else
          multiTxUpdates(ctx)
      } else
        Failure(new IllegalStateException("JDBCContex setting error: must set batch = false !"))
    }
  }
}
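The update functions are not exercised in the demo program; the following is a hypothetical sketch of jdbcTxUpdates usage, assuming a COUNTY table like the one defined in the Slick DAO above:

// a hypothetical sketch: single-statement transactional update that
// returns the generated key (Some(1) = return key by column index 1)
import scala.util.{Success, Failure}

val updCtx = JDBCContext(dbName = 'h2)
  .setUpdateCommand(true, "insert into COUNTY (NAME) values (?)", "New County")

jdbcTxUpdates[Vector](updCtx) match {
  case Success(keys) => println(s"generated keys: ${keys.mkString(",")}")
  case Failure(err) => println(s"update failed: ${err.getMessage}")
}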

JDBCQueryDemo.scala

import akka.actor._
import akka.stream.scaladsl._
import akka.stream._
import scalikejdbc._
import configdbs._
import jdbccontext._
import JDBCEngine._

object JDBCStreaming extends App {

  implicit val actorSys = ActorSystem("actor-system")
  implicit val ec = actorSys.dispatcher
  implicit val mat = ActorMaterializer()

  ConfigDBsWithEnv("dev").setup('h2)
  ConfigDBsWithEnv("dev").loadGlobalSettings()

  case class DataRow(year: String, state: String, county: String, value: String)

  //data row converter
  val toRow = (rs: WrappedResultSet) => DataRow(
    year = rs.string("REPORTYEAR"),
    state = rs.string("STATENAME"),
    county = rs.string("COUNTYNAME"),
    value = rs.string("VALUE")
  )

  //construct the context
  val ctx = JDBCQueryContext[DataRow](
    dbName = 'h2,
    statement = "select * from AIRQM",
    extractor = toRow
  )

  //pass the context to construct an akka-stream source
  val akkaSource = jdbcAkkaStream(ctx)

  //a sink to display the rows
  val snk = Sink.foreach[(DataRow, Long)] { case (row, idx) =>
    println(s"rec#: $idx - year: ${row.year} location: ${row.state},${row.county} value: ${row.value}")
  }

  //the stream can be terminated manually with kill.shutdown
  val kill: UniqueKillSwitch =
    akkaSource.zipWithIndex.viaMat(KillSwitches.single)(Keep.right).to(snk).run()

  scala.io.StdIn.readLine()
  kill.shutdown()
  actorSys.terminate()
  println("+++++++++++++++")

  object SlickDAO {
    import slick.jdbc.H2Profile.api._

    case class CountyModel(id: Int, name: String)
    case class CountyTable(tag: Tag) extends Table[CountyModel](tag, "COUNTY") {
      def id = column[Int]("ID", O.AutoInc, O.PrimaryKey)
      def name = column[String]("NAME", O.Length(64))   // assumed column length
      def * = (id, name) <> (CountyModel.tupled, CountyModel.unapply)
    }
    val CountyQuery = TableQuery[CountyTable]
    val filter = "Kansas"
    val qry = CountyQuery.filter { _.name.toUpperCase like s"%${filter.toUpperCase}%" }
    val statement = qry.result.statements.head
  }
  import SlickDAO._

  def toCounty: WrappedResultSet => CountyModel = rs =>
    CountyModel(id = rs.int("id"), name = rs.string("name"))

  //construct the context
  val slickCtx = JDBCQueryContext[CountyModel](
    dbName = 'h2,
    statement = "select * from county where id > ? and id < ?",
    parameters = Seq(0, 100),   // assumed example bounds
    extractor = toCounty
  )

  val vecCounty: Vector[CountyModel] = jdbcQueryResult[Vector, CountyModel](slickCtx)
  vecCounty.foreach(r => println(s"${r.id},${r.name}"))
}
