feat: initial project setup and add 룰.md file

commit 09a4d38512
2025-07-28 09:53:31 +09:00
8165 changed files with 1021855 additions and 0 deletions

# Config Codec
This module implements a codec for reading and writing git config files (this
includes the .gitmodules file). As far as I can tell, this is a variant of
the INI format.
## codec.decode(ini) -> config
Given the text of the config file, return the data as an object.
The following config:
```ini
[user]
    name = Tim Caswell
    email = tim@creationix.com
[color]
    ui = true
[color "branch"]
    current = yellow bold
    local = green bold
    remote = cyan bold
```
Will parse to this js object
```js
{
  user: {
    name: "Tim Caswell",
    email: "tim@creationix.com"
  },
  color: {
    ui: "true",
    branch: {
      current: "yellow bold",
      local: "green bold",
      remote: "cyan bold"
    }
  }
}
```
## codec.encode(config) -> ini
This reverses the conversion and writes a string from a config object.
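As a rough illustration of the output shape, here is a minimal sketch of such an encoder that handles only the flat (single-level) subset of the format; js-git's real encoder also supports subsections like `[color "branch"]`:

```js
// Illustrative encoder for the flat subset of the format above
// (not js-git's actual implementation).
function encode(config) {
  var lines = [];
  Object.keys(config).forEach(function (section) {
    lines.push("[" + section + "]");
    Object.keys(config[section]).forEach(function (key) {
      lines.push("\t" + key + " = " + config[section][key]);
    });
  });
  return lines.join("\n") + "\n";
}

var ini = encode({ user: { name: "Tim Caswell", email: "tim@creationix.com" } });
// ini begins with "[user]" followed by tab-indented key/value lines
```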

# Deflate
This module implements a simple interface that, when given normal data, returns the deflated version. This wraps the pako dependency.
## deflate(inflated) => deflated
```js
var deflate = require('js-git/lib/deflate');
var deflated = deflate(original);
```

# Inflate Stream
This module implements zlib inflate by hand with a special streaming interface.
This is used in js-git to inflate git object fragments in a pack-stream.
## inflateStream(onEmit, onUnused) -> onInput
```js
var onInput = inflateStream(onEmit, onUnused);
someStream.on("data", function (chunk) {
  onInput(null, chunk);
});
function onEmit(err, out) {
  if (err) throw err;
  // out is a chunk of inflated data
}
function onUnused(chunks) {
  // chunks is an array of extra buffers or buffer slices.
}
```

# Inflate
This module implements a simple interface that, when given deflated data, returns the inflated version.
## inflate(deflated) -> inflated
```js
var inflate = require('js-git/lib/inflate');
var inflated = inflate(deflated);
```

# Object Codec
This module implements a codec for the binary git object format for blobs, trees, tags, and commits.
This library is useful for writing new storage backends. Normal users will probably
just use one of the existing mixins for object storage.
## codec.frame({type,body}) -> buffer
This function accepts an object with `type` and `body` properties. The `type`
property must be one of "blob", "tree", "commit" or "tag". The body can be a
pre-encoded raw-buffer or a plain javascript value. See encoder docs below for
the formats of the different body types.
The returned binary value is the fully framed git object. The sha1 of this is
the git hash of the object.
```js
var codec = require('js-git/lib/object-codec');
var sha1 = require('git-sha1');
var bin = codec.frame({ type: "blob", body: "Hello World\n"});
var hash = sha1(bin);
```
## codec.deframe(buffer, decode) -> {type,body}
This function accepts a binary git buffer and returns the `{type,body}` object.
If `decode` is true, then the body will also be decoded into a normal javascript
value. If `decode` is false or missing, the body will contain the raw-buffer.
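The framing itself is simple enough to sketch by hand (an illustration of the format, not js-git's actual code): the framed form is `<type> <byte-length>\0<body>`, so deframing splits at the first NUL byte:

```js
// Minimal deframe sketch: parse "<type> <length>" up to the first NUL
// byte and return the raw body after it.
function deframe(buffer) {
  var nul = buffer.indexOf(0);
  var header = buffer.slice(0, nul).toString().split(" ");
  return { type: header[0], body: buffer.slice(nul + 1) };
}

var obj = deframe(Buffer.from("blob 12\0Hello World\n"));
// obj.type === "blob", obj.body.toString() === "Hello World\n"
```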
## codec.encoders
This is an object containing the four encoder functions. Each function has the signature:
encode(body) -> raw-buffer
Where body is the JS representation of the type and raw-buffer is the git encoded
version of that value, but without the type and length framing.
```js
var encoders = require('js-git/lib/object-codec').encoders;
var modes = require('js-git/lib/modes');
```
Blobs must be native binary values (Buffer in node, Uint8Array in browser).
It's recommended to either use the `bodec` library to create binary values from
strings directly or configure your system with the `formats` mixin that allows
for unicode strings when working with blobs.
```js
rawBin = encoders.blob(new Uint8Array([1,2,3,4,5,6]));
rawBin = encoders.blob(bodec.fromUnicode("Hello World"));
```
Trees are objects with filenames as keys and {mode,hash} objects as values.
The modes are integers; it's best to use the modes module to construct them.
```js
rawBin = encoders.tree({ "greeting.txt": {
  mode: modes.file,
  hash: blobHash
}});
```
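For the curious, the raw binary a tree encodes to is a series of entries, each `"<octal mode> <name>\0"` followed by the 20 raw hash bytes, in sorted order. A sketch of one entry (an illustration of git's tree format, not js-git's code; the hash below is git's well-known blob hash of `"Hello World\n"`):

```js
// Encode one raw tree entry: ASCII octal mode, space, name, NUL,
// then the 20 raw hash bytes.
function encodeTreeEntry(mode, name, hexHash) {
  return Buffer.concat([
    Buffer.from(mode.toString(8) + " " + name + "\0"),
    Buffer.from(hexHash, "hex")
  ]);
}

var entry = encodeTreeEntry(33188, "greeting.txt", // 33188 === 0100644, modes.file
  "557db03de997c86a4a028e1ebd3a1ceb225be238");
// entry starts with "100644 greeting.txt" and is 40 bytes long
```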
Commits are objects with the required fields {tree,author,message}.
Since a commit can have zero or more parent commits, you specify the parent
hashes via the `parents` property as an array of hashes; if there is exactly
one parent, you can instead specify it with the singular `parent`.
The `author` field contains {name,email,date}, and commits also require a
`committer` field with the same structure.
The `date` property of `author` and `committer` has the format {seconds,offset},
where `seconds` is a unix timestamp in seconds and `offset` is the timezone
offset in minutes. (Your local offset can be found with
`(new Date).getTimezoneOffset()`.)
The `message` field is mandatory and is a simple string.
```js
rawBin = encoders.commit({
  tree: treeHash,
  author: {
    name: "Tim Caswell",
    email: "tim@creationix.com",
    date: {
      seconds: 1391790910,
      offset: 7 * 60
    }
  },
  parents: [ parentCommitHash ],
  message: "This is a test commit\n"
});
```
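One way to build the `{seconds,offset}` date structure from a JS `Date` (an illustrative helper, not part of js-git's API):

```js
// Convert a JS Date into the {seconds,offset} structure described above:
// unix seconds plus the local timezone offset in minutes.
function toGitDate(date) {
  return {
    seconds: Math.floor(date.getTime() / 1000),
    offset: date.getTimezoneOffset()
  };
}

var when = toGitDate(new Date(1391790910 * 1000));
// when.seconds === 1391790910; when.offset is the local offset in minutes
```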
Annotated tags are like commits, except they have different fields.
```js
rawBin = encoders.tag({
  object: commitHash,
  type: "commit",
  tag: "mytag",
  tagger: {
    name: "Tim Caswell",
    email: "tim@creationix.com",
    date: {
      seconds: 1391790910,
      offset: 7 * 60
    }
  },
  message: "Tag it!\n"
});
```
## codec.decoders
This is just like `codec.encoders` except these functions do the opposite.
They have the format:
decode(raw-buffer) -> body
```js
var commit = decoders.commit(rawCommitBin);
```
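To give a feel for what the commit decoder handles, here is a simplified sketch of parsing the raw commit text (header lines, a blank line, then the message); it skips the person-line parsing the real decoder also performs:

```js
// Split the raw commit text at the blank line; header lines are
// "<key> <value>", and "parent" may repeat.
function parseCommit(text) {
  var split = text.indexOf("\n\n");
  var commit = { parents: [], message: text.slice(split + 2) };
  text.slice(0, split).split("\n").forEach(function (line) {
    var i = line.indexOf(" ");
    var key = line.slice(0, i);
    var value = line.slice(i + 1);
    if (key === "parent") commit.parents.push(value);
    else commit[key] = value;
  });
  return commit;
}

var commit = parseCommit("tree abc123\nparent def456\n\nA test message\n");
// commit.tree === "abc123", commit.parents === ["def456"]
```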

# Pack Codec
This module implements a codec for packfile streams used in the git network
protocols as well as the on-disk packfile format.
These are sync stream transforms: each accepts an emit function and returns a
write function, and both functions share the same interface. You signal `end`
on the input side by writing undefined (or nothing); when emit is called with
undefined, that is `end` on the output.
Since this is sync, errors are simply thrown. If you want to use this in the
context of an async stream with back-pressure, it's up to the consumer to
handle exceptions and write to the input at the correct rate. To implement
back-pressure, you only need to keep writing values to the input until enough
data comes out of the output. Because this is sync, by the time `write()`
returns, `emit()` will have been called as many times as it ever will (without
more writes).
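The write/emit contract itself can be seen in a toy sync transform (purely illustrative, unrelated to pack parsing):

```js
// Writing a value transforms it; writing undefined signals end of input,
// which passes through as emit(undefined) on the output side.
function upcase(emit) {
  return function write(chunk) {
    if (chunk === undefined) return emit();
    emit(chunk.toUpperCase());
  };
}

var out = [];
var write = upcase(function (item) { out.push(item); });
write("hello");
write("world");
write(); // end of input
// out is ["HELLO", "WORLD", undefined]
```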
Here is an example of using `decodePack` with a node push stream, ignoring
back-pressure.
```js
var decodePack = require('js-git/lib/pack-codec').decodePack;
var write = decodePack(onItem);
stream.on("data", write);
stream.on("end", write);
var meta;
function onItem(item) {
  if (item === undefined) {
    // End of stream
  }
  else if (meta === undefined) {
    meta = item;
  }
  else {
    console.log(item);
  }
}
```
The first output is the meta object:
```js
{
  version: 2,
  num: num-of-objects
}
```
## codec.decodePack(emit) -> write
The input to this is the raw buffer chunks of the packstream. The chunks can be
broken up at any point, so this is ideal for streaming from disk or the network.
`version` is the git pack protocol version, and `num` is the number of objects
that will be in this stream.
All output objects after the meta object are raw git objects.
```js
{
  type: type,
  size: buffer-size,
  body: raw-buffer,
  offset: offset-in-stream,
  [ref: number-or-hash]
}
```
There are two extra types here that aren't seen elsewhere: `ofs-delta` and
`ref-delta`. In both cases, these are a diff that applies on top of another
object in the stream. The difference is that `ofs-delta` stores a number in
`ref` that is the number of bytes to go back in the stream to find the base
object, while `ref-delta` includes the full hash of its base object.
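A consumer resolving deltas therefore needs a little bookkeeping; for example, tracking decoded objects by stream offset lets an `ofs-delta` find its base (an illustrative sketch, not js-git's code):

```js
// Index decoded items by their stream offset; an ofs-delta's ref is the
// number of bytes back from the delta's own offset to its base object.
var byOffset = {};
function store(item) { byOffset[item.offset] = item; }
function resolveBase(delta) { return byOffset[delta.offset - delta.ref]; }

store({ offset: 12, type: "blob", body: "base object" });
var base = resolveBase({ offset: 40, type: "ofs-delta", ref: 28 });
// base is the blob stored at offset 12
```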
## codec.encodePack(emit) -> write
This is the reverse. In fact, if you fed this the output from `decodePack`, its
output should match the original stream exactly.
The objects don't need as much data as the parser outputs. Specifically, the
meta object need only contain:
```js
{ num: num-of-objects }
```
And each item need only contain:
```js
{
  type: type,
  body: raw-buffer,
  [ref: number-or-hash]
}
```