Update nbconvert & fix title level #767

Closed · wants to merge 1 commit
4 changes: 2 additions & 2 deletions doc/modules/ROOT/pages/tutorials/centrality-algorithms.adoc
@@ -101,8 +101,8 @@ gds.run_cypher(

gds.run_cypher(
"""
-    UNWIND $rels AS rel 
-    MATCH (source:City {node_id: rel.Origin}), (target:City {node_id: rel.Destination}) 
+    UNWIND $rels AS rel
+    MATCH (source:City {node_id: rel.Origin}), (target:City {node_id: rel.Destination})
CREATE (source)-[:HAS_FLIGHT_TO]->(target)
""",
params={"rels": routes_df.to_dict("records")},
6 changes: 3 additions & 3 deletions doc/modules/ROOT/pages/tutorials/community-detection.adoc
@@ -114,8 +114,8 @@ gds.run_cypher(

gds.run_cypher(
"""
-    UNWIND $rels AS rel 
-    MATCH (source:Subreddit {name: rel.SOURCE_SUBREDDIT}), (target:Subreddit {name: rel.TARGET_SUBREDDIT}) 
+    UNWIND $rels AS rel
+    MATCH (source:Subreddit {name: rel.SOURCE_SUBREDDIT}), (target:Subreddit {name: rel.TARGET_SUBREDDIT})
CREATE (source)-[:HYPERLINKED_TO]->(target)
""",
params={"rels": relationship_df.to_dict("records")},
@@ -232,7 +232,7 @@ We can also check that the property was written by the below command.
----
gds.run_cypher(
"""
-    MATCH (n) WHERE 'louvainCommunityId' IN keys(n) 
+    MATCH (n) WHERE 'louvainCommunityId' IN keys(n)
RETURN n.name, n.louvainCommunityId LIMIT 10
"""
)
Original file line number Diff line number Diff line change
@@ -346,7 +346,7 @@ part with something like a
https://neo4j.com/docs/graph-data-science-client/current/graph-object/#_projecting_a_graph_object[projection
from a Neo4j database] to create a more realistic production workflow.

-== Comparison with other methods
+=== Comparison with other methods

As mentioned we tried to mimic the setup of the benchmarks in the
NeurIPS paper
@@ -381,7 +381,7 @@ descent compared to the deep learning models above - HashGNN is an
unsupervised algorithm - HashGNN runs a lot faster (even without a GPU)
and requires a lot less memory

-== Further learning
+=== Further learning

To learn more about the topics covered in this notebook, please check
out the following pages of the GDS manual:
18 changes: 9 additions & 9 deletions examples/centrality-algorithms.ipynb
@@ -30,7 +30,7 @@
"\n",
"This notebook will show how you can apply eigenvector centrality, betweenness centrality, degree centrality and closeness centrality on a graph dataset.\n",
"\n",
"### Setup\n",
"## Setup\n",
"\n",
"We start by importing our dependencies and setting up our GDS client connection to the database."
]
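For orientation, the client connection that this setup cell refers to typically looks like the sketch below. The URI and credentials are illustrative placeholders, not values taken from this PR.

```python
from graphdatascience import GraphDataScience

# Placeholder connection details; substitute your own deployment's values.
NEO4J_URI = "neo4j://localhost:7687"
NEO4J_AUTH = ("neo4j", "password")

gds = GraphDataScience(NEO4J_URI, auth=NEO4J_AUTH)

# Quick sanity check that the client can reach a GDS-enabled server.
print(gds.version())
```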
@@ -92,7 +92,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Importing the dataset\n",
"## Importing the dataset\n",
"\n",
"We import the dataset as a pandas dataframe first. We deal with two files here. The file `reachability-meta.csv.gz` stores the names of the cities and their information while the file `reachability.txt.gz` stores the edges of the graph. An edge exists from city `i` to city `j` if the estimated airline travel time is less than a threshold.\n"
]
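As a rough sketch of that import step (file locations and most column names below are assumptions; only `Origin` and `Destination` are taken from the Cypher used later in this notebook), reading the two files with pandas might look like:

```python
import pandas as pd

# Hypothetical local copies of the two files; the notebook may fetch them from a URL instead.
cities_df = pd.read_csv("reachability-meta.csv.gz", compression="gzip")

routes_df = pd.read_csv(
    "reachability.txt.gz",
    sep=r"\s+",
    comment="#",  # skip any leading comment lines in the edge list
    names=["Origin", "Destination", "Weight"],  # assumed column layout for the edge file
    compression="gzip",
)

print(cities_df.head())
print(routes_df.head())
```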
@@ -146,8 +146,8 @@
"\n",
"gds.run_cypher(\n",
" \"\"\"\n",
" UNWIND $rels AS rel \n",
" MATCH (source:City {node_id: rel.Origin}), (target:City {node_id: rel.Destination}) \n",
" UNWIND $rels AS rel\n",
" MATCH (source:City {node_id: rel.Origin}), (target:City {node_id: rel.Destination})\n",
" CREATE (source)-[:HAS_FLIGHT_TO]->(target)\n",
" \"\"\",\n",
" params={\"rels\": routes_df.to_dict(\"records\")},\n",
@@ -174,7 +174,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Eigenvector Centrality\n",
"## Eigenvector Centrality\n",
"\n",
"[Eigenvector centrality](https://neo4j.com/docs/graph-data-science/current/algorithms/eigenvector-centrality/) measures the importance or influence of a node based on its connections to other nodes in the network. A higher eigenvector centrality score suggests that a node is more central and influential within the network.\n",
"\n",
@@ -289,7 +289,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Betweenness Centrality\n",
"## Betweenness Centrality\n",
"\n",
"[Betweenness Centrality](https://neo4j.com/docs/graph-data-science/current/algorithms/betweenness-centrality/) quantifies the importance of a node as a bridge or intermediary in the network. It measures how often a node lies on the shortest path between other pairs of nodes. \n",
"\n",
@@ -367,7 +367,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Degree Centrality\n",
"## Degree Centrality\n",
"\n",
"[Degree Centrality](https://neo4j.com/docs/graph-data-science/current/algorithms/degree-centrality/) measures the number of connections (edges) a node has in the network. \n",
"\n",
@@ -445,7 +445,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Cleanup\n",
"## Cleanup\n",
"\n",
"Before finishing we can clean up the example data from both the GDS in-memory state and the database."
]
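A minimal sketch of such a cleanup, assuming the projected graph object is bound to `G` and the example nodes are labeled `City`:

```python
# Drop the in-memory graph projection ...
G.drop()

# ... and delete the example nodes and relationships from the database.
gds.run_cypher("MATCH (n:City) DETACH DELETE n")
```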
@@ -474,7 +474,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### References\n",
"## References\n",
"- For the network:\n",
"Brendan J. Frey and Delbert Dueck. \"Clustering by passing messages between data points.\" Science 315.5814 (2007): 972-976.\n",
"\n",
20 changes: 10 additions & 10 deletions examples/community-detection.ipynb
@@ -33,7 +33,7 @@
"\n",
"The tasks we cover here include performing initial graph preprocessing using Weakly Connected Components and then performing community detection on the largest component using the Louvain algorithm.\n",
"\n",
"### Setup\n",
"## Setup\n",
"\n",
"We start by importing our dependencies and setting up our GDS client connection to the database."
]
@@ -100,7 +100,7 @@
"id": "48bd8af1",
"metadata": {},
"source": [
"### Importing the dataset\n",
"## Importing the dataset\n",
"\n",
"We import the dataset as a pandas dataframe first. We work with only a subset of the dataset. The sampled data is only till 1st March 2014. "
]
@@ -187,8 +187,8 @@
"\n",
"gds.run_cypher(\n",
" \"\"\"\n",
" UNWIND $rels AS rel \n",
" MATCH (source:Subreddit {name: rel.SOURCE_SUBREDDIT}), (target:Subreddit {name: rel.TARGET_SUBREDDIT}) \n",
" UNWIND $rels AS rel\n",
" MATCH (source:Subreddit {name: rel.SOURCE_SUBREDDIT}), (target:Subreddit {name: rel.TARGET_SUBREDDIT})\n",
" CREATE (source)-[:HYPERLINKED_TO]->(target)\n",
" \"\"\",\n",
" params={\"rels\": relationship_df.to_dict(\"records\")},\n",
@@ -226,7 +226,7 @@
"id": "9c259471",
"metadata": {},
"source": [
"### Weakly Connected Components\n",
"## Weakly Connected Components\n",
"\n",
"A graph dataset need not always be connected. That is, there may not exist a path from every node to \n",
"every other node in the graph dataset (subgraphs in it may not be connected to each other at all). Hence, we \n",
@@ -332,7 +332,7 @@
"id": "17942d04",
"metadata": {},
"source": [
"### Community Detection using Louvain\n",
"## Community Detection using Louvain\n",
"\n",
"We use the [Louvain](https://neo4j.com/docs/graph-data-science/current/algorithms/louvain/) algorithm to detect communities in our subgraph and assign a `louvainCommunityId` to each community."
]
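A sketch of that step with the GDS Python client, assuming the largest-component subgraph is bound to `G`; writing the property back is what makes `louvainCommunityId` queryable with Cypher in the next cell:

```python
# Run Louvain and write each node's community id back to the database.
louvain_result = gds.louvain.write(G, writeProperty="louvainCommunityId")
print(f"Communities found: {louvain_result['communityCount']}")
print(f"Modularity: {louvain_result['modularity']:.4f}")
```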
@@ -382,7 +382,7 @@
"source": [
"gds.run_cypher(\n",
" \"\"\"\n",
" MATCH (n) WHERE 'louvainCommunityId' IN keys(n) \n",
" MATCH (n) WHERE 'louvainCommunityId' IN keys(n)\n",
" RETURN n.name, n.louvainCommunityId LIMIT 10\n",
" \"\"\"\n",
")"
@@ -424,7 +424,7 @@
"id": "5ed56f82",
"metadata": {},
"source": [
"### Further ideas\n",
"## Further ideas\n",
"\n",
"* Inspect the produced communities using [Bloom](https://neo4j.com/docs/bloom-user-guide/current/). You can use rule-based styling based on the community property.\n",
"* Try to tune more parameters of Louvain and see how the communities differ.\n",
@@ -437,7 +437,7 @@
"id": "6e00ed7b",
"metadata": {},
"source": [
"### Cleanup\n",
"## Cleanup\n",
"\n",
"Before finishing we can clean up the example data from both the GDS in-memory state and the database."
]
@@ -471,7 +471,7 @@
"id": "65dcb952",
"metadata": {},
"source": [
"### References\n",
"## References\n",
"\n",
"Srijan Kumar, William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2018. Community Interaction and Conflict on the Web. In Proceedings of the 2018 World Wide Web Conference (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 933–943. https://doi.org/10.1145/3178876.3186141"
]
Original file line number Diff line number Diff line change
@@ -250,7 +250,7 @@
"id": "e9d0dbcc",
"metadata": {},
"source": [
"### The HashGNN node embedding algorithm\n",
"## The HashGNN node embedding algorithm\n",
"\n",
"As the last part of the training pipeline, there will be an ML training algorithm.\n",
"If we use the `plot_keywords` directly as our feature input to the ML algorithm, we will not utilize any of the relationship data we have in our graph.\n",
2 changes: 1 addition & 1 deletion requirements/dev/dev.txt
@@ -1,6 +1,6 @@
ruff == 0.6.1
mypy == 1.8.0
-nbconvert == 7.11.0
+nbconvert == 7.16.4
pandas-stubs == 2.2.2.240603
tox == 4.11.3
types-setuptools == 68.1.0.1